v3.15.0 Deployment Instructions - OpenWIS/openwis GitHub Wiki
The OpenWIS Installation Guide aims at providing the various steps to install and configure the OpenWIS components. This guide is intended for system administrators, as most of the operations require root access.
The Installation Guide describes the installation and configuration process for each OpenWIS component in dedicated sections.
A full distribution of OpenWIS consists of the following artefacts. These are either available from Jenkins or can be built from source:
- openwis-securityservice.war: Security service endpoint.
- openwis-management-service.ear: Management services.
- openwis-dataservice.ear: Data services.
- openwis-user-portal.war: Public user portal.
- openwis-admin-portal.war: Administration portal.
- stagingPost.war: Staging post.
- PopulateLDAP.zip: A tool used during installation to populate users and groups in OpenAM.
- openwis-dependencies.zip: An archive containing various dependencies used by OpenWIS.
OpenWIS requires the following dependencies:
- OpenAM 14.5.1
- OpenDJ 4.4.5
In addition to these, the following system and 3rd party dependencies may be required for specific components:
- OpenJDK 1.8
- WildFly 19.1.0.Final
- Tomcat 9.0.33
- vsftpd
- httpd
From an installation point of view, deploying a GISC, a DCPC or an NC is equivalent. The only differences reside in the configuration (described below).
Each deployment consists of deploying the following components:
- Security Service (OpenDJ, OpenAM and OpenWIS WebServices)
- Database (PostgreSQL)
- Standalone Solr
- Management Service
- Data Service + Cache replication (GISC only)
- Portals and front-end
- Staging Post
An example OpenWIS v3.15 installation structure requires four Tomcat instances. If all Tomcat instances run on the same OS, each must run under a different user.
For maintenance or migration purposes, the whole system will need to be restarted. To avoid losing any data or requests, the following process may be performed:
- On the Admin portal: stop all local services (in Backup / Availability of Local System). This prevents users from performing new requests and the system from ingesting and processing new subscriptions.
- Wait for all JMS queues to go down to 0 (see section 5.6.1). This waits for current processing to complete.
- Stop servers (only if necessary) in this order:
- User Portal(s)
- Admin Portal
- Solr
- Security Service
- Data Service
- Postgres
- Perform the maintenance operations
- Start the servers in reverse order (only for servers that have been stopped):
- Postgres
- Data Service
- Security Service
- Solr
- Admin Portal
- User Portal(s)
All OpenWIS components can be installed on RedHat 7. The following system configuration is required:
- Time set in UTC and synchronized with NTP on an external time server
- English language (required by OpenAM installer)
- Firewall settings to allow communication between components
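For the firewall point, a minimal firewalld sketch for RHEL 7 follows; the ports are the defaults used elsewhere in this guide (Tomcat 8080, WildFly HTTP 8180, OpenDJ LDAP 1389, PostgreSQL 5432) and must be adjusted to your actual deployment layout:
# Example only: open the default ports used in this guide (adjust to your layout)
firewall-cmd --permanent --add-port=8080/tcp    # Tomcat (portals, OpenAM)
firewall-cmd --permanent --add-port=8180/tcp    # WildFly HTTP (data/management services)
firewall-cmd --permanent --add-port=1389/tcp    # OpenDJ LDAP
firewall-cmd --permanent --add-port=5432/tcp    # PostgreSQL
firewall-cmd --reload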
The first step is to ensure that the hostname is properly set.
Make sure you are logged in as root.
cd /etc/sysconfig
vi network
Look for the HOSTNAME line and replace it with the new hostname you want to use.
HOSTNAME=<HOSTNAME>
Next we will edit the /etc/hosts file and set the new hostname.
vi /etc/hosts
The changes to /etc/hosts and /etc/sysconfig/network are necessary to make your changes persistent (in the event of an unscheduled reboot). Now we use the hostname program to change the hostname that is currently set.
hostname <HOSTNAME>
Run it again without any parameters to check that the hostname has changed:
hostname
Finally, restart the network to apply the changes made to /etc/hosts and /etc/sysconfig/network.
service network restart
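Note: the steps above use the RHEL 6 style network files; on RHEL 7, the same result can also be achieved with hostnamectl, for example:
hostnamectl set-hostname <HOSTNAME>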
As root, create a user “openwis” with the default home directory (/home/openwis):
useradd openwis
passwd openwis
Changing password for user openwis…
The following Java version is required: OpenJDK 1.8.0
Java installation with yum:
As root:
yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel
If Java is already installed, check that java-1.8.0-openjdk-devel.x86_64 is installed (the JDK is required for Maven compilation) and select the Java 1.8 version as root, using:
alternatives --config java
As openwis,
Set the Java environment variables by adding these lines to the .bash_profile file:
JAVA_HOME=/etc/alternatives/jre_1.8.0_openjdk
export JAVA_HOME
Apply:
source .bash_profile
Verify:
echo $JAVA_HOME
and
java -version
- In this document, the Java 1.8 release used is 1.8.0_275.
If git is not installed, run as root:
yum install git-all
Note: Some OpenWIS versions may require an older version of Maven. See: Getting-Started#installing-apache-maven
As root:
yum install -y wget unzip
cd /opt/
wget https://downloads.apache.org/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.zip
unzip apache-maven-3.6.3-bin.zip
rm apache-maven-3.6.3-bin.zip
and as user:
cd ~
vim .bash_profile
Add:
export M2_HOME=/opt/apache-maven-3.6.3
export M2=$M2_HOME/bin
export PATH=$M2:$PATH
If needed:
export JAVA_OPTS="-Xms1g -Xmx2g -XX:MaxPermSize=2g"
Apply:
source .bash_profile
Verify:
mvn -version
Outputs:
Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
Maven home: /opt/apache-maven-3.6.3
Java version: 1.8.0_242, vendor: Oracle Corporation, runtime: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.242.b08-0.el7_7.x86_64/jre
Default locale: fr_FR, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-957.21.3.el7.x86_64", arch: "amd64", family: "unix"
Maximum Number of Open File Descriptors
The OpenWIS system makes intensive use of file access and network connections.
By default, RedHat limits the number of open file descriptors to 1024 per user, which may prove insufficient during bursts of files or service requests to process.
Beyond this limit, errors such as “Too many open files” or “cannot open connection” to the database occur, which may cause unexpected and unpredictable problems.
All the components run as the ‘openwis’ user or the ‘postgres’ user (database only).
It is therefore recommended to increase this limit on all the component hosts.
Test:
ulimit -n
If the limit is below 8192, then as root configure the file /etc/security/limits.conf (the system needs to be restarted):
openwis - nofile 8192
postgres - nofile 8192
To check that this limit has been applied, log in as the openwis or postgres user and check:
ulimit -n
It should show: 8192
Clone OpenWIS in ~/maven_projects directory:
As openwis,
mkdir maven_projects
cd maven_projects
git clone https://github.com/OpenWIS/openwis.git
cd openwis
git checkout openwis-3.15
mvn clean install -Pdependencies,openwis,admin
mvn clean install -Puser
Tests can be skipped with the option:
-DskipTests=true
Copy and unzip openwis-dependencies/target/openwis-dependencies.zip to /home/openwis:
cp maven_projects/openwis/openwis-dependencies/target/openwis-dependencies.zip ~
unzip openwis-dependencies.zip
cd ~/openwis-dependencies/data-management-services/
unzip openwis-dataservice-config-files.zip
Note: For Maven builds on Windows, see the Getting-Started wiki.
Note: For this installation, a minimum of three Tomcat instances is required:
- For Security service and utils
- For Admin portal
- For User portal
Each must be installed under a different user, with a different home directory and path. The following chapter details the installation of one Apache Tomcat instance.
You can download a copy of Tomcat 9.0.41 from the project website (or one of the mirrors):
As openwis,
wget https://downloads.apache.org/tomcat/tomcat-9/v9.0.41/bin/apache-tomcat-9.0.41.tar.gz
Once downloaded, perform the following commands in the openwis home directory:
As openwis,
tar -xzf apache-tomcat-9.0.41.tar.gz
chmod a+x apache-tomcat-9.0.41/bin/*.sh
If the server is simply installed as an application, the provided scripts (in openwis-dependencies/portals) can be used by the user to start and stop Tomcat:
As openwis
cp ~/openwis-dependencies/portals/*.sh .
chmod a+x *.sh
Note: the start/stop scripts may need updating; $CATALINA_HOME must point to the Tomcat instance home. Start script example:
#!/bin/sh
#
# Start OpenWIS Tomcat
#
# Settings
export CATALINA_OPTS="-Xmx512m -XX:MaxPermSize=256m"
export CATALINA_HOME=/home/openwis/apache-tomcat-9.0.41
export CATALINA_PID=$CATALINA_HOME/openwis-tomcat.pid
# Check if Tomcat did not crash (with PID file)
if [ -e $CATALINA_PID ]
then
    ps -p `cat $CATALINA_PID` &> /dev/null
    if [ $? = 0 ]
    then
        echo "Tomcat still running"
        exit 0
    else
        echo "Tomcat crashed, cleaning remaining PID file"
        rm $CATALINA_PID
    fi
fi
# Start Tomcat
cd $CATALINA_HOME/bin
./startup.sh
- Memory adjustment
By default, Tomcat is configured to use a maximum of 512 MB of memory for heap space.
If the host allows more memory allocation, it is recommended to increase this maximum value.
Edit start_openwis_tomcat.sh:
export CATALINA_OPTS="-Xmx512m -XX:MaxPermSize=256m"
The following parameter may be adjusted: -Xmx represents the maximum heap size (‘m’ means megabytes).
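For example, on a host with spare memory, the line could be raised to (illustrative values only):
export CATALINA_OPTS="-Xmx2048m -XX:MaxPermSize=512m"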
- Modify the script "tail_tomcat_log.sh" with the correct Tomcat version:
tail -f ~/apache-tomcat-9.0.41/logs/catalina.out
- Stop/start Tomcat:
As openwis
~/start_openwis_tomcat.sh
~/stop_openwis_tomcat.sh
- To verify the Tomcat state, check the catalina.out file:
less /home/openwis/apache-tomcat-9.0.41/logs/catalina.out
or:
~/tail_tomcat_log.sh
The Tomcat application server may be added as a system service.
To add Tomcat as a service started at boot, as root:
cd /etc/systemd/system/
vim tomcat.service
Create the 'tomcat.service' file with the following lines:
[Unit]
Description=Apache Tomcat Web Application Server
After=syslog.target network.target
[Service]
Type=forking
User=openwis
Environment="JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk/"
Environment="CATALINA_HOME=/home/openwis/apache-tomcat-9.0.41"
Environment="CATALINA_PID=/home/openwis/apache-tomcat-9.0.41/openwis-tomcat.pid"
Environment="CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC"
ExecStart=/home/openwis/apache-tomcat-9.0.41/bin/startup.sh
ExecStop=/home/openwis/apache-tomcat-9.0.41/bin/shutdown.sh
[Install]
WantedBy=multi-user.target
*CATALINA_HOME, JAVA_HOME and the process name may differ depending on the server's purpose.
Then:
systemctl daemon-reload
- Tomcat stop/start as a service
As root
systemctl start tomcat
systemctl stop tomcat
- Enable the service to be automatically started at boot:
systemctl enable tomcat
If more than one Tomcat instance must be installed on the same machine:
Whenever multiple Tomcats are added as services, the user, process name and paths must not be the same.
Follow section 2.7, set a different $CATALINA_HOME (and $JAVA_HOME if needed),
then copy the scripts and rename them accordingly,
e.g. start_openwis_portals.sh,
and edit the $CATALINA_HOME value to the path of the new Tomcat.
- Locate server.xml in {Tomcat home}/conf/
- Find a statement similar to the following:
<Connector port="8080" maxHttpHeaderSize="8192" maxThreads="150" minSpareThreads="25" maxSpareThreads="75" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true"/>
- Tomcat's server.xml configures it to run on port 8080 by default. Change every connector configuration port (Connector port="8080") to another port number.
- Edit and save the server.xml file, then restart Tomcat. Done.
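For illustration, a second instance could move the connector to 8081 (the redirectPort, and the shutdown port in the <Server port="8005" ...> element near the top of server.xml, must also be unique per instance):
<Connector port="8081" maxHttpHeaderSize="8192" maxThreads="150" minSpareThreads="25" maxSpareThreads="75" enableLookups="false" redirectPort="8444" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true"/>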
If a Tomcat instance must run under a different Java than the default, edit its start/stop scripts to use the other Java home.
As openwis user:
Download WildFly 19.1.0.Final from the WildFly website: https://www.wildfly.org/downloads/
cd ~
wget https://download.jboss.org/wildfly/19.1.0.Final/wildfly-19.1.0.Final.tar.gz
Extract the archive in the openwis home directory:
tar -xzf wildfly-19.1.0.Final.tar.gz
Edit ~/.bash_profile and add the following lines:
export JBOSS_HOME=/home/openwis/wildfly-19.1.0.Final
Save and execute
source ~/.bash_profile
Create a 'wildfly' symlink to the installation directory:
ln -s /home/openwis/wildfly-19.1.0.Final wildfly
JBoss is configured to use a maximum of 512 MB of heap space by default. If the host allows more memory allocation, it is recommended to increase this maximum value.
The following parameter may be adjusted:
- -Xmx represents the maximum heap size (‘m’ means megabytes)
Edit $JBOSS_HOME/bin/standalone.conf to adjust the $JAVA_OPTS values:
JAVA_OPTS="-Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m"
Example values:
JAVA_OPTS="-Xms1024m -Xmx2048m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=768m"
*-XX:MaxMetaspaceSize should be increased in case of "ERROR: java.lang.OutOfMemoryError: Metadata space"
The time-zone used by the JVM needs to be explicitly set to UTC. Also add the following line at the end of the file:
JAVA_OPTS="$JAVA_OPTS -Duser.timezone=UTC"
The server logs will be written to $JBOSS_HOME/standalone/log/ . If you want the logs written to another location, remove the empty log directory and replace it with a symbolic link to the desired log directory:
rmdir $JBOSS_HOME/standalone/log
ln -s <target log dir> $JBOSS_HOME/standalone/log
As the openwis user, copy the WildFly stop/start scripts to the user home:
cd ~
cp ~/openwis-dependencies/data-management-services/openwis-dataservice-config/*.sh ~
chmod a+x *.sh
The following WildFly configuration values are set in the start script and can be changed if necessary. To do so, open start_openwis_jboss.sh in a text editor and change the appropriate value:
- bindingAddress: The address WildFly will be bound to. Default = “0.0.0.0”
- multicastAddress: The multicast address used for communication between cluster members. All cluster members must have the same multicast address. Default = “239.255.100.100”
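For instance, the relevant lines in start_openwis_jboss.sh might look like the following sketch (the actual variable names in the provided script may differ):
bindingAddress="0.0.0.0"
multicastAddress="239.255.100.100"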
The provided scripts are used from openwis user to start and stop WildFly:
As openwis
./start_openwis_jboss.sh
./stop_openwis_jboss.sh
To verify that WildFly is running, check the server.log file:
tail -f $JBOSS_HOME/standalone/log/server.log
The WildFly application server may be added as a systemd service.
- To add WildFly as a service, there are 3 files to create/configure.
As root:
wildfly.conf
mkdir -p /etc/wildfly
vim /etc/wildfly/wildfly.conf
Add:
# The configuration you want to run
WILDFLY_CONFIG=standalone-full.xml
# The mode you want to run
WILDFLY_MODE=standalone
# The address to bind to
WILDFLY_BIND=0.0.0.0
launch.sh
vim /home/openwis/wildfly-19.1.0.Final/bin/launch.sh
Add:
#!/bin/bash
echo "Start Wildfly"
echo "Configuration file: $2"
echo "Bind address: $3"
CONF_FILE="$2"
if [ "x$CONF_FILE" = "x" ]; then
CONF_FILE="standalone-full.xml"
fi
if [ "x$OPENWIS_HOME" = "x" ]; then
OPENWIS_HOME="/home/openwis"
fi
if [ "x$WILDFLY_HOME" = "x" ]; then
WILDFLY_HOME="$OPENWIS_HOME/wildfly"
fi
if [[ "$1" == "domain" ]]; then
$WILDFLY_HOME/bin/domain.sh -c $2 -b $3
else
$WILDFLY_HOME/bin/standalone.sh -c $CONF_FILE
fi
Make the file executable:
chmod a+x /home/openwis/wildfly-19.1.0.Final/bin/launch.sh
wildfly.service
vim /etc/systemd/system/wildfly.service
Add:
[Unit]
Description=The WildFly Application Server
After=syslog.target network.target
[Service]
Environment=LAUNCH_JBOSS_IN_BACKGROUND=1
EnvironmentFile=-/etc/wildfly/wildfly.conf
User=openwis
LimitNOFILE=102642
PIDFile=/var/run/wildfly/wildfly.pid
ExecStart=/home/openwis/wildfly/bin/launch.sh $WILDFLY_MODE $WILDFLY_CONFIG $WILDFLY_BIND
StandardOutput=journal
[Install]
WantedBy=multi-user.target
Notify systemd of the new unit file:
sudo systemctl daemon-reload
Start the wildfly service:
systemctl start wildfly
Check the service status:
systemctl status wildfly
Status output:
[root@linux]# systemctl status wildfly
● wildfly.service - The WildFly Application Server
Loaded: loaded (/etc/systemd/system/wildfly.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2020-06-11 20:01:18 CEST; 18min ago
Main PID: 4124 (launch.sh)
CGroup: /system.slice/wildfly.service
├─4124 /bin/bash /home/openwis/wildfly/bin/launch.sh standalone standalone-full.xml 0.0.0.0
├─4125 /bin/sh /home/openwis/wildfly/bin/standalone.sh -c standalone-full.xml -b 0.0.0.0
└─4182 java -D[Standalone] -server -Xms1024m -Xmx2048m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=768m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.a...
server.log output:
...
INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 19.1.0.Final (WildFly Core 7.0.0.Final) started in 16621ms - Started 2536 of 2725 services (403 services are lazy, passive or on-demand)
Stop the wildfly service:
systemctl stop wildfly
- Enable the service to be automatically started at boot:
systemctl enable wildfly
Check wildfly systemd log:
journalctl -u wildfly
Or, in order to see the last info:
journalctl -u wildfly --since "5 minutes ago"
To check how much disk space is currently taken up by the journal, use
journalctl --disk-usage
Output:
Archived and active journals take up 104.0M on disk.
To delete archived journal entries manually, you can use either the --vacuum-size or the --vacuum-time option.
In the example below, we delete archived journal files so that the journal size goes back from 104 MB to 72 MB (depending on the size and the number of journal files in "/run/log/journal/").
journalctl --vacuum-size=50M
Output:
Deleted archived journal /run/log/journal/5d2575b95e0c432dac2a14103fc1ca5f/system@23ab0e7cb99f49c28f2714139d8bdb48-0000000000000001-0005a7ce8ad18545.journal (32.0M).
Vacuuming done, freed 32.0M of archived journals on disk.
To delete the archived journals (the only active journal is kept):
journalctl -m --vacuum-time=1s
Output:
Deleted archived journal /run/log/journal/5d2575b95e0c432dac2a14103fc1ca5f/system@23ab0e7cb99f49c28f2714139d8bdb48-000000000000784f-0005a888c3f15e2a.journal (24.0M).
Deleted archived journal /run/log/journal/5d2575b95e0c432dac2a14103fc1ca5f/system@23ab0e7cb99f49c28f2714139d8bdb48-000000000000de9b-0005a922dbc5631e.journal (24.0M).
Vacuuming done, freed 48.0M of archived journals on disk.
Check the size of the active journal:
journalctl --disk-usage
Output:
Archived and active journals take up 24.0M on disk.
Reference source: www.loggly.com
Optional: redirect the output of the systemd service to a file.
In /etc/systemd/system/wildfly.service, replace
StandardOutput=journal
with the 3 following lines:
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=wildfly
Then:
sudo systemctl daemon-reload
Create the file /etc/rsyslog.d/wildfly.conf:
vim /etc/rsyslog.d/wildfly.conf
Add:
if $programname == 'wildfly' then /var/log/wildfly/wildfly.log
& stop
Then:
systemctl restart rsyslog
Then a copy of the WildFly log is located in /var/log/wildfly/:
ll /var/log/wildfly/
total 224
-rw-r--r-- 1 root root 11190 Apr 9 10:50 console.log
-rw------- 1 root root 216544 Jun 11 20:26 wildfly.log
⚠️ Notice: /var/log/wildfly/wildfly.log is yet another log file for WildFly. The data and management service log file is a heavy file, especially in the GISC case, so it is necessary to carefully monitor the evolution of the size of the different file systems and to adjust the crontab if necessary.
The Data Service deployment is described in chapter 5.
The Security Service deployment is composed of:
- OpenDJ (LDAP server)
- OpenAM
- OpenWIS Security Services (User & Group management)
OpenJDK 8 is required.
Note: This has been developed and tested using openjdk version "1.8.0_252"
The following part involves many configuration steps. Useful logs to monitor:
- Tomcat logs (OpenDJ, OpenAM, OpenWIS Security Services)
- JBoss logs (openwis-dataservice, openwis-management-service)
- OpenDJ logs located at:
/home/openwis/opendj/logs
- Openam logs located at:
/home/openwis/openam/log
or /home/openwis/openam/openam/debug
The following sections describe how to install OpenDJ 4.4.5.
Download opendj-4.4.5.zip:
cd ~
wget https://github.com/OpenIdentityPlatform/OpenDJ/releases/download/4.4.5/opendj-4.4.5.zip
As the openwis user, unzip this archive in the /home/openwis directory.
unzip opendj-4.4.5.zip
Run OpenDJ setup:
cd opendj
For a GUI installation:
./setup
The splash screen is displayed.
The Welcome page is displayed; click Next.
Enter the hostname, the LDAP listener port, the administration connector port and the login/password for the root User DN.
Select Standalone server (default).
Set the Directory Base DN to dc=opensso,dc=java,dc=net and, in the Directory Data menu, select "Only create Base Entry".
Leave the default runtime settings and click Next.
Click on Finish
Installation is processing
Installation completed.
Then check that OpenDJ has started. If not, start the OpenDJ server:
cd ~/opendj/bin
./start-ds
See section 3.1.3 for more information about OpenDJ stop/start.
OpenDJ 4.4.5 is now installed and ready for OpenAM installation.
- Interactive mode installation
Launch the setup:
unzip -qq opendj-4.4.5.zip
cd ~/opendj
./setup --cli
Console display:
What would you like to use as the initial root user DN for the Directory
Server? [cn=Directory Manager]:
Please provide the password to use for the initial root user:
Please re-enter the password for confirmation:
Provide the fully-qualified directory server host name that will be used when
generating self-signed certificates for LDAP SSL/StartTLS, the administration
connector, and replication [wiam31-sidev.meteo.fr]:
On which port would you like the Directory Server to accept connections from
LDAP clients? [1389]:
On which port would you like the Administration Connector to accept
connections? [4444]:
Do you want to create base DNs in the server? (yes / no) [yes]:
Provide the backend type:
1) JE Backend
2) PDB Backend
Enter choice [1]:
Provide the base DN for the directory data: [dc=example,dc=com]: dc=opensso,dc=java,dc=net
Options for populating the database:
1) Leave the database empty
2) Only create the base entry
3) Import data from an LDIF file
4) Load automatically-generated sample data
Enter choice [1]: 2
Do you want to enable SSL? (yes / no) [no]:
Do you want to enable Start TLS? (yes / no) [no]:
Do you want to start the server when the configuration is completed? (yes /
no) [yes]:
Setup Summary
=============
LDAP Listener Port: 1389
Administration Connector Port: 4444
JMX Listener Port:
LDAP Secure Access: disabled
Root User DN: cn=Directory Manager
Directory Data: Backend Type: JE Backend
Create New Base DN dc=opensso,dc=java,dc=net
Base DN Data: Only Create Base Entry (dc=opensso,dc=java,dc=net)
Start Server when the configuration is completed
What would you like to do?
1) Set up the server with the parameters above
2) Provide the setup parameters again
3) Print equivalent non-interactive command-line
4) Cancel and exit
Enter choice [1]:
Configuring Directory Server ..... Done.
Creating Base Entry dc=opensso,dc=java,dc=net ..... Done.
Starting Directory Server ...... Done.
Then check that OpenDJ has started. If not, start the OpenDJ server:
cd ~/opendj/bin
./start-ds
To start OpenDJ:
cd opendj/bin
./start-ds
or
~/opendj/bin/start-ds
To stop OpenDJ:
cd opendj/bin
./stop-ds
or
~/opendj/bin/stop-ds
The OpenDJ server may be added as a system service.
To add OpenDJ as a service started at boot, as root:
cd /etc/systemd/system/
vim opendj.service
Create the 'opendj.service' file like the following example:
[Unit]
SourcePath=/home/openwis/opendj/bin
Description=OpenDJ Server (systemd init)
Before=runlevel2.target runlevel3.target runlevel4.target runlevel5.target shutdown.target display-manager.service
After=
Conflicts=shutdown.target
[Service]
Type=simple
User=openwis
Restart=no
TimeoutSec=5min
IgnoreSIGPIPE=no
KillMode=process
GuessMainPID=no
RemainAfterExit=yes
ExecStart=/home/openwis/opendj/bin/start-ds
ExecStop=/home/openwis/opendj/bin/stop-ds
[Install]
WantedBy=multi-user.target
*SourcePath and the process name may differ depending on installation choices.
Then:
systemctl daemon-reload
- OpenDJ stop/start as a service
As root
systemctl stop opendj
systemctl start opendj
- Enable the service to be automatically started at boot:
As root
systemctl enable opendj
via GUI console:
~/opendj/bin/control-panel&
via command line:
~/opendj/bin/status
OpenDJ status results at the end of OpenWIS installation
Import SchemaOpenWIS.ldif
The 3.15 schema is located in the source tree at openwis-securityservice/openwis-securityservice-war/src/main/config/SchemaOpenWIS.ldif (the full path is used in the command below).
As user:
cd /home/openwis/opendj/bin
./ldapmodify -h localhost -p 4444 -D "cn=Directory Manager" -w <LDAP_PASSWORD> -a -X --useSSL -f /home/openwis/maven_projects/openwis/openwis-securityservice/openwis-securityservice-war/src/main/config/SchemaOpenWIS.ldif
Note: the configuration of the password policy can be found here: https://github.com/OpenWIS/openwis/blob/openwis-3.15/docs/opendj-doc.md
Displaying the default password policy:
cd ~/opendj/bin
./dsconfig -D "cn=directory manager" -w <rootUserPassword> -n get-password-policy-prop --policy-name "Default Password Policy"
Note: If replication is active the schema must be replicated on each instance (to have all the attributes of OPENWIS)
The following sections describe how to install OpenAM release 14.5.1.
OpenAM 14.5.1 is deployed in Tomcat 9. Refer to section 2.7 for Tomcat installation details.
Configure Tomcat 9 for your environment:
- Start/Stop scripts to adapt
- Include UTF-8 configuration
- Include VM memory settings
- Tomcat failover consideration:
If failover is configured for a component deployed in Tomcat 9 (such as OpenAM), the component will be deployed on two Tomcat instances that run simultaneously.
Web access will be done via a load balancer, such as an Apache front-end with the mod_proxy module (described in a dedicated section).
To enable session affinity on the load balancer, the Tomcat instances need to be configured:
vi /home/openwis/apache-tomcat-9.0.33/conf/server.xml
Set the jvmRoute attribute on Engine element:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
Download OpenAM war from the middleware repository:
https://github.com/OpenIdentityPlatform/OpenAM/releases/tag/14.5.1
Deploy openam.war to the Tomcat container:
mkdir /home/openwis/apache-tomcat-9.0.33/webapps/openam
cp OpenAM-14.5.1.war /home/openwis/apache-tomcat-9.0.33/webapps/openam/
cd /home/openwis/apache-tomcat-9.0.33/webapps/openam
jar xvf OpenAM-14.5.1.war
⚠️ For OpenAM installation, sufficient Tomcat memory is needed.
Configure memory size in Tomcat starting script. Modify JAVA_OPTS value:
export JAVA_OPTS="-server -Xms2048m -Xmx2048m"
Restart Tomcat.
Pre-requisite: Tomcat and OpenDJ are started.
Go to OpenAM Installation Page
http://<HOST_NAME>:<PORT>/openam/
- WARNING: Don’t use localhost or an IP address for the OpenSSO URL. You have to use the fully qualified hostname.
- WARNING: The system time of each entity must be synchronized with UTC time.
- WARNING: When configuring an installation with 2 OpenAM instances, each OpenAM instance must be installed with its own distinct hostname. The front-end will only be set up during “site configuration”.
Using HTTPS: To set up OpenAM using HTTPS, configure the Apache front-end first, and perform the whole installation process via HTTPS instead of HTTP (IDP Discovery included).
Create a new custom configuration and accept the license.
Fill in the password of the amAdmin user (OpenAM administrator).
Keep default values and press next
The values should be: Port=50389, Admin Port=5444 and JMX Port=1689
Select OpenDJ as data store type
Change only:
- The port to 1389.
- The Root suffix to: dc=opensso,dc=java,dc=net
- The password: fill in the password used during the OpenDJ installation.
Leave default values.
Set the default policy agent password (not used in OpenWIS). The password must be different from the amadmin one.
A summary of the details is shown before installation. If it is OK, click "Create Configuration".
Configuration processing...
Configuration passed.
After installation is complete:
Log in to the OpenAM console with the user name amAdmin (or amadmin).
Password is the one set during the previous step (step 1).
After installation, go to:
cd ~/openam/openam/log/
and create the amAuthentication.error file (it is missing from the current OpenAM version and must be added manually):
> amAuthentication.error
Start OpenDJ control panel to check that OpenAM and OpenDJ are linked. The following steps depend on the way the installation is done either with graphical capabilities or without (command line).
cd ~/opendj/bin
./control-panel &
Click on Directory Data / Manage Entries
Check if the tree structure is equivalent to this screen.
cd ~/opendj/bin
Replace “HOST-NAME” with your server’s hostname and execute:
./ldapsearch --hostname HOST-NAME --port 1389 --baseDN "dc=opensso,dc=java,dc=net" "(objectclass=*)" @groupOfUniqueNames
Output:
dn: dc=opensso,dc=java,dc=net
objectClass: top
objectClass: domain
dn: ou=people,dc=opensso,dc=java,dc=net
objectClass: top
objectClass: organizationalUnit
ou: people
dn: ou=groups,dc=opensso,dc=java,dc=net
objectClass: top
objectClass: organizationalUnit
ou: groups
As openwis user install SSO Admin Tools as follows:
cd /home/openwis
mkdir ssoAdminTools
cd ssoAdminTools
Download the SSO Admin Tools from the middleware repository if available, otherwise, perform a Maven build.
Unzip the tools and run the setup:
unzip -q SSOAdminTools-14.5.1.zip
./setup
You will be prompted for:
...
Do you accept the license? yes
Path to config files of OpenAM server [/home/openwis/openam]:
Debug Directory [/home/openwis/ssoAdminTools/debug]:/home/openwis/openam/openam/debug
Log Directory [/home/openwis/ssoAdminTools/log]:/home/openwis/openam/openam/log
The scripts are properly setup under directory: /home/openwis/ssoAdminTools/openam
Debug directory is /home/openwis/openam/openam/debug.
Log directory is /home/openwis/openam/openam/log.
The version of this tools.zip is: OpenAM 14.5.1
The version of your server instance is: OpenAM 14.5.1 Build d8b8db3cac (2020-March-11 23:25)
cd /home/openwis/ssoAdminTools/openam/bin
Create a file named passwd containing the amAdmin user's password:
echo "YOUR_amAdmin_PASSWORD" > passwd
Change the mode of passwd: it must be readable only by its owner.
chmod 400 passwd
Reload iPlanetAMUserService:
First, delete the iPlanetAMUserService service:
./ssoadm delete-svc -u amadmin -f passwd -s iPlanetAMUserService
Service was deleted.
Then, add it back again:
./ssoadm create-svc -u amAdmin -f passwd --xmlfile /home/openwis/maven_projects/openwis/openwis-securityservice/openwis-securityservice-war/src/main/config/amUser.xml
Output:
Service was added.
cd /home/openwis/ssoAdminTools/openam/bin
cp /home/openwis/openwis-dependencies/security/attrs.properties .
Edit attrs.properties and set the values:
- idrepo-ldapv3-config-ldap-server=YOUR_HOSTNAME:1389
- sun-idrepo-ldapv3-config-authpw=the LDAP password for OpenDJ cn=Directory Manager
Remove deprecated entry:
- sun-idrepo-ldapv3-config-ssl-enabled
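A sketch of the resulting entries, assuming a hypothetical host ldap.example.org (use your own hostname and the real Directory Manager password):
idrepo-ldapv3-config-ldap-server=ldap.example.org:1389
sun-idrepo-ldapv3-config-authpw=<LDAP_PASSWORD>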
Next, update the data store configuration:
./ssoadm update-datastore -u amadmin -f passwd -e / -m OpenDJ -D attrs.properties
Datastore profile was updated.
Check OpenAM / Data Store connection
Log in to the OpenAM admin console with amadmin user:
Select the "Top Level Realm"
Go to "Data Stores" in the left menu
Select OpenDJ
The “LDAP User Attributes” section must contain the OpenWIS attributes (e.g. OpenWISNeedUserAccount)
The “Attribute Name for Group Membership” must contain isMemberOf
Select “Load schema when saved” and click on "Save".
If the message Information “Profile was updated” appears, the connection is OK.
Otherwise, check the configuration on this page (LDAP host, port and password)
The following sections describe how to define a Circle of Trust via OpenAM, which will include:
- The hosted Identity Provider (IdP): the local IdP provided by this OpenAM
- An optional remote IdP, when IdP federation is required
- The IdP Discovery service, allowing an IdP to be discovered from the portal
- The service providers, which are the OpenWIS portals
Log in to the OpenAM console with "amadmin" user.
http://<HOST_NAME>:<PORT>/openam
Click on “Create SAMLv2 Providers”
Select "Create Hosted Identity Provider"
Enter IDP Name: IDP1
Select “test” as Signing key
Enter Circle of Trust name: e.g. “cot_openwis”
Click On Configure
Then click on Finish
Check at:
"Top Level Realm"
"Applications" in left menu
"SAML 2.0"
Click on Federation tab
-> Entity Providers section
IDP1 is created.
Note: At this step, configure the attribute mapper for IDP1 just created.
Go to SAML 2.0 menu and click on "Federation" tab
For each Identity Provider in the Federation tab (IDP1 in this wiki), select “Assertion Processing” tab.
Add the following lines one by one as attributes to the Attribute Mapper (the attribute names are case sensitive and there must be no spaces):
OpenWISAddress=OpenWISAddress
OpenWISAddressCity=OpenWISAddressCity
OpenWISAddressZip=OpenWISAddressZip
OpenWISAddressState=OpenWISAddressState
OpenWISAddressCountry=OpenWISAddressCountry
OpenWISClassOfService=OpenWISClassOfService
OpenWISBackUps=OpenWISBackUps
OpenWISProfile=OpenWISProfile
isMemberOf=isMemberOf
OpenWISNeedUserAccount=OpenWISNeedUserAccount
cn=cn
sn=sn
givenname=givenname
mail=mail
Click on "Save". A message is displayed:
Information SAMLv2 Identity Provider properties are updated
If needed (federation of Identity providers), it’s possible to register a remote identity provider.
Connect to IDP2: http://<HOST_NAME-2>:/opensso
From "Realm Overview - Common Tasks", click on “Create SAMLv2 Providers” and click on "Register Remote Identity Provider".
Select URL to locate the metadata file.
Enter the URL:
http://<HOST_NAME-1>:/opensso/saml2/jsp/exportmetadata.jsp
Add this provider to the Circle of Trust previously created (e.g. cot_openwis).
Click on "Configure"
Then, the message "Your Identity Provider has been configured" is displayed.
Download IDPDiscovery-14.5.1.war file from the middleware repository:
wget https://github.com/OpenIdentityPlatform/OpenAM/releases/download/14.5.1/OpenAM-14.5.1.zip
⚠️ The default directory for OpenAM-14.5.1.zip extraction is "openam"; do not overwrite your previous OpenAM installation, and specify a directory for the extracted files.
unzip OpenAM-14.5.1.zip -d extraction_directory
Copy IDPDiscovery-14.5.1.war to your OpenAM/OpenDJ server.
Deploy the file into Tomcat, as openwis user:
cd ~
cp IDPDiscovery-14.5.1.war /home/openwis/apache-tomcat-9.0.33/webapps/idpdiscovery.war
Configuration:
Display this url to configure IDP Discovery Service:
http://<IDP_DISCOVERY_HOST_NAME>:<PORT>/idpdiscovery/Configurator.jsp
Set:
- /home/openwis/idpdiscovery as the debug directory
- Persistent as the cookie type
Click on Configure.
Log in to the OpenAM admin console with the amadmin user
http://<IDP_HOST-NAME>:<IDP_PORT>/openam
Click on "Top Level Realm"
In the left menu, click on "Applications" and "SAML 2.0"
Click on "Federation" tab.
Edit Circle Of trust by clicking on its name ex. cot_openwis
Add the SAML2 reader and writer URLs:
SAML2 Writer Service URL: http://<IDP_DISCOVERY_HOST_NAME>:<PORT>/idpdiscovery/saml2writer
SAML2 Reader Service URL: http://<IDP_DISCOVERY_HOST_NAME>:<PORT>/idpdiscovery/saml2reader
Note: If an Apache frontend is used to access OpenAM / IdpDiscovery, set the external host name / port configured on the front-end for these services.
Click on "Save"
The OpenWIS Security Service WAR and its configuration file are deployed in the Tomcat instance that contains OpenAM.
As openwis user:
cd ~
mkdir /home/openwis/apache-tomcat-9.0.33/webapps/openwis-securityservice
cd /home/openwis/apache-tomcat-9.0.33/webapps/openwis-securityservice
cp /home/openwis/maven_projects/openwis/openwis-securityservice/openwis-securityservice-war/target/openwis-securityservice.war .
jar xvf openwis-securityservice.war
Edit the configuration file located at:
vim ./WEB-INF/classes/openwis-securityservice.properties
The following parameters need to be set; replace each '@...@' placeholder with the value appropriate to your system:
- ldap_host: set the hostname where OpenDJ has been installed
- ldap_port: keep 1389
- ldap_user: keep cn=Directory Manager
- ldap_password: set the LDAP admin password configured during OpenDJ installation
- @managementServiceServer@: set the Management Service server with port number (currently JBoss, e.g. http://openwis.server.com:8180/openwis-management-service-ejb/AlertService/AlertService?wsdl)
- @openam.baseDir@: set the OpenAM home, e.g. /home/openwis/openam/openam/log/amAuthentication.error
The following optional parameters may also be set:
- register_users_threshold: number of registered users beyond which an alarm is raised
- global_groups: the default Global groups (comma separated list)
- log.timer.file: the OpenAM file to check for authentication failure
- openwis.management.alertservice.wsdl: the WSDL location of the alert service
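A minimal sketch of a filled-in openwis-securityservice.properties, assuming a hypothetical host openwis.example.org for OpenDJ and a WildFly management service on port 8180 (all values must be adapted to your system):
ldap_host=openwis.example.org
ldap_port=1389
ldap_user=cn=Directory Manager
ldap_password=<LDAP_PASSWORD>
openwis.management.alertservice.wsdl=http://openwis.example.org:8180/openwis-management-service-ejb/AlertService/AlertService?wsdl
log.timer.file=/home/openwis/openam/openam/log/amAuthentication.error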
The OpenWIS Security Web Services will be accessible at:
http://<host>:<port>/openwis-securityservice/services/<SERVICE>?wsdl
For example:
http://your_host:8080/openwis-securityservice/services/UserManagementService?wsdl
When deploying a new Centre, the LDAP must be initialized. The provided OpenWIS distribution contains a tool called “PopulateLDAP”, which can be used to perform the LDAP initialization when defining a new Centre.
The zip file is located at:
~/maven_projects/openwis/openwis-securityservice/openwis-securityservice-utils/PopulateLDAP/target/PopulateLDAP.zip
As the openwis user, install PopulateLDAP, for example on the IdP:
cd ~
mkdir PopulateLDAP
cd PopulateLDAP
cp ~/maven_projects/openwis/openwis-securityservice/openwis-securityservice-utils/PopulateLDAP/target/PopulateLDAP.zip .
unzip PopulateLDAP.zip
chmod a+x populateLDAP.sh
Edit the ./populateLDAP.sh script to set your system's properties:
- Update the CLASSPATH to be consistent with your current version: openwis-securityservice-utils-populate-ldap-@[email protected]
  You can find the jar version information by executing: find ~ -name openwis-securityservice-utils-populate-ldap*
- Replace @OPENWIS_SECURITY_HOST@:@PORT@ with your current system setting.
Run the script as user:
./populateLDAP.sh
- Enter 1 to create a new Centre.
- Enter the Centre name (the deployment name, e.g. GiscMF).
  The deployment name is the one defined in the user and admin portals' openwis-metadataportal.properties:
  #Deployment name.
  openwis.metadataportal.deploy.name=GiscMF
- Enter the Administrator login, password, email, first name and last name.
./populateLDAP.sh
Output:
------ Populate LDAP ------
[1] Initialize Deployment
[2] Populate LDAP
[3] Reset LDAP Users
[4] Reset LDAP Groups
[5] Reset LDAP Users And Groups
Choose one of the above values. (any other value to exit)
1
Initialize Deployment:
Deployment Name: GiscMF
Administrator Login: openwis
Administrator Password: <admin password>
Administrator Email: <Email>
Administrator First name: <admin first name>
Administrator Last name: <admin last name>
This script creates the initial LDAP nodes for a given deployment (groups and organizational units) to allow an OpenWIS authentication.
The created groups need to be known by the portals. An automatic synchronization is done when the portal (user or admin) starts.
So remember that the portal may need to be restarted (if already installed and running, as described in section 6.3).
Continue with the portals' configuration in chapter 6.3.3.
PostgreSQL 10 (10.0 is the tested and supported version) and the CITEXT extension are required. PostGIS has been replaced by SOLR spatial search mechanisms.
These are provided as RPM repositories from PostgreSQL. It is recommended to use these repositories to simplify the installation process.
Select the appropriate repository for the database version you wish to use and the OS distribution you are installing the database on. The applicable PostgreSQL 10 repository is listed below:
PostgreSQL Version | Operating System | URL |
---|---|---|
10.0 | RHEL 7+ | https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm |
URLs of other repositories can be found at: http://yum.postgresql.org/
Install the repository using rpm. Use the URL specific to your OS (e.g. the following is for RHEL7 x86_64):
As root:
rpm -ivh https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
Install the PostgreSQL packages, including the contrib package:
As root:
yum install -y epel-release
yum install -y postgresql10-server postgresql10-contrib
... Complete!
Optional, install "pspg", a usefull pager for psql:
yum install -y pspg
As root:
systemctl enable postgresql-10
The process of initializing the database depends on whether you are using a custom data directory.
- If you are using a custom directory, run the initdb script as the “postgres” user.
- Include the full path of the directory as the argument to the “-D” option:
As root:
su - postgres -c '/usr/pgsql-10/bin/initdb --data-checksums -D ~/10/data'
... Success.
After initdb, the data and log directories are:
PGDATA=/var/lib/pgsql/10/data
PGLOG=/var/lib/pgsql/10/data/log
systemctl start postgresql-10
Verify that PostgreSQL is up and running:
systemctl status postgresql-10
[root@centos76 ~]# systemctl status postgresql-10
● postgresql-10.service - PostgreSQL 10 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-10.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2020-05-05 12:01:43 CEST; 2 days ago
Docs: https://www.postgresql.org/docs/10/static/
Process: 13311 ExecStartPre=/usr/pgsql-10/bin/postgresql-10-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Main PID: 13317 (postmaster)
CGroup: /system.slice/postgresql-10.service
├─13317 /usr/pgsql-10/bin/postmaster -D /var/lib/pgsql/10/data/
├─13319 postgres: logger process
├─13321 postgres: checkpointer process
├─13322 postgres: writer process
├─13323 postgres: wal writer process
├─13324 postgres: autovacuum launcher process
├─13325 postgres: stats collector process
├─13326 postgres: bgworker: logical replication launcher
├─19055 postgres: openwis OpenWIS 127.0.0.1(36236) idle
├─19064 postgres: openwis OpenWIS 127.0.0.1(36242) idle
└─19067 postgres: openwis OpenWIS 127.0.0.1(36248) idle
May 05 12:01:42 centos76 systemd[1]: Starting PostgreSQL 10 database server...
May 05 12:01:43 centos76 postmaster[13317]: 2020-05-05 12:01:43.444 CEST [13317] LOG: listening on IPv4 address "0.0.0.0", port 5432
May 05 12:01:43 centos76 postmaster[13317]: 2020-05-05 12:01:43.445 CEST [13317] LOG: listening on IPv6 address "::", port 5432
May 05 12:01:43 centos76 postmaster[13317]: 2020-05-05 12:01:43.456 CEST [13317] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
May 05 12:01:43 centos76 postmaster[13317]: 2020-05-05 12:01:43.461 CEST [13317] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
May 05 12:01:43 centos76 postmaster[13317]: 2020-05-05 12:01:43.688 CEST [13317] LOG: redirecting log output to logging collector process
May 05 12:01:43 centos76 postmaster[13317]: 2020-05-05 12:01:43.688 CEST [13317] HINT: Future log output will appear in directory "log".
May 05 12:01:43 centos76 systemd[1]: Started PostgreSQL 10 database server.
Note: For the rest of the install instructions, to log in as the postgres user, log in as root and then execute:
su - postgres
As the “postgres” user, create the "openwis" postgres user.
The "openwis" user should not be a super user and should not need to create any separate databases or user roles.
The "openwis" user, however, may need to create new tables or modify existing tables:
createuser -P openwis
Enter password for new role: <enter password>
Enter it again: <enter password again>
createdb -O openwis OpenWIS
$ psql -d OpenWIS -c 'create extension citext;'
CREATE EXTENSION
OpenWIS=> \dx
List of installed extensions
  Name   | Version |   Schema   |                    Description
---------+---------+------------+---------------------------------------------------
 citext  | 1.4     | public     | data type for case-insensitive character strings
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)
Note: the 2nd line appears if "yum install -y pspg" has been done. See the pspg GitHub page for examples of use: https://github.com/okbob/pspg
- Quit the OpenWIS DB:
OpenWIS=# \q
The OpenWIS distribution comes with two scripts: schema.ddl and purge.sql. The schema.ddl script is used to create the OpenWIS database tables used by the data services (the tables used by the user portals are created automatically on first start). Both of these scripts are located in “openwis-dependencies.zip”:
/home/openwis/openwis-dependencies/database/purge.sql
/home/openwis/openwis-dependencies/database/schema.ddl
As the “openwis” user:
- OpenWIS DB schema
The “schema.ddl” file from “openwis-dependencies/database” needs to be loaded into the OpenWIS database:
psql -d OpenWIS -f /home/openwis/openwis-dependencies/database/schema.ddl
- Purge script
A purge script, purge.sql, is provided in openwis-dependencies/database.
It purges the blacklisting entries of users who have had no activity (request/subscription) for 30 days (the default value, which can be changed).
This is needed in particular because the data service is not strictly tied to a local user referential and can manage requests/subscriptions
of users belonging to a remote Centre (via Remote Authentication).
The user lifecycle (creation/deletion) is therefore not correlated with the Data Service.
To execute the purge:
psql -d OpenWIS -f /home/openwis/openwis-dependencies/database/purge.sql
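If the purge should run automatically, a crontab entry for the openwis user could look like the following sketch (the schedule is an example, and it assumes the openwis user can connect to the database without an interactive password):
# Run the blacklist purge every night at 02:30
30 2 * * * psql -d OpenWIS -f /home/openwis/openwis-dependencies/database/purge.sql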
As the postgres user, locate pg_hba.conf:
- Connect to the postgres DB:
$ psql
psql (10.12)
Type "help" for help.
- Execute:
postgres=# show hba_file ;
              hba_file
------------------------------------
 /var/lib/pgsql/10/data/pg_hba.conf
(1 row)
- Exit the postgres DB console:
postgres=# \q
- Edit pg_hba.conf to allow access to these components according to the installation needs:
vim /var/lib/pgsql/10/data/pg_hba.conf
- trust: Allow the connection unconditionally. This method allows anyone that can connect to the PostgreSQL database server to log in as any PostgreSQL user they like, without the need for a password.
- reject: Reject the connection unconditionally. This is useful for "filtering out" certain hosts from a group.
- md5: Require the client to supply an MD5-encrypted password for authentication.
- password: Require the client to supply an unencrypted password for authentication. Since the password is sent in clear text over the network, this should not be used on untrusted networks.
- gss: Use GSSAPI to authenticate the user. This is only available for TCP/IP connections.
- sspi: Use SSPI to authenticate the user. This is only available on Windows.
- krb5: Use Kerberos V5 to authenticate the user. This is only available for TCP/IP connections.
- ident: Obtain the operating system user name of the client (for TCP/IP connections by contacting the ident server on the client, for local connections by getting it from the operating system) and check if it matches the requested database user name.
- ldap: Authenticate using an LDAP server.
- cert: Authenticate using SSL client certificates.
- pam: Authenticate using the Pluggable Authentication Modules (PAM) service provided by the operating system.
- peer: Obtain the client's operating system user name from the operating system and check if it matches the requested database user name. This is only available for local connections.
- For more info about pg_hba.conf file, see : https://www.postgresql.org/docs/10/auth-pg-hba-conf.html
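For illustration, entries allowing the local host and two hypothetical component hosts (10.0.0.11 and 10.0.0.12) to reach the OpenWIS database as the openwis user with md5 passwords could look like:
# TYPE  DATABASE  USER     ADDRESS          METHOD
host    OpenWIS   openwis  127.0.0.1/32     md5
host    OpenWIS   openwis  10.0.0.11/32     md5
host    OpenWIS   openwis  10.0.0.12/32     md5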
- Edit postgresql.conf:
vim /var/lib/pgsql/10/data/postgresql.conf
Set listen_addresses to the host IP address (or ‘*’) if the database needs to be accessible from outside this host.
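For example, to listen on all interfaces:
listen_addresses = '*'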
- Restart PostgreSQL:
systemctl restart postgresql-10
The Data Service runs in WildFly 19.1.0.
Note: The OpenWIS application could be installed in the /var/opt/openwis/ directory. In this wiki, the middleware and OpenWIS components are located in /home/openwis/.
As openwis:
- Create a directory for the Data Service properties files:
mkdir /home/openwis/conf
- Create the directories for data:
mkdir /home/openwis/harness
mkdir /home/openwis/harness/incoming
mkdir /home/openwis/harness/ingesting
mkdir /home/openwis/harness/ingesting/fromReplication
mkdir /home/openwis/harness/outgoing
mkdir /home/openwis/harness/working
mkdir /home/openwis/harness/working/fromReplication
mkdir /home/openwis/cache
mkdir /home/openwis/stagingPost
mkdir /home/openwis/temp
mkdir /home/openwis/replication
mkdir /home/openwis/replication/sending
mkdir /home/openwis/replication/sending/local
mkdir /home/openwis/replication/sending/destinations
mkdir /home/openwis/replication/receiving
mkdir /home/openwis/status
These folders correspond to:
- Incoming: folder on which incoming GTS files are dropped and consumed by OpenWIS dispatch process
- Ingesting: folder containing files to ingest (after dispatch)
- Outgoing: folder on which OpenWIS feeds files to the GTS
- Working: internal working folder
- Cache: the cache root directory
- StagingPost : the staging post root directory
- Temp : the directory for temporarily stored files
- Replication: the directory for the replication folders, with the following sub-structure:
  - sending/local: contains a buffer of 24 hours of data received from the GTS that needs to be sent to replication destinations
  - sending/destinations/<DESTINATION_NAME>: contains links to sending/local data that are currently being sent to the given destination
  - receiving/<SOURCE_NAME>: contains files received via replication from the given source
- Status: the directory containing status files (status of each service)
Depending on the deployment, all these directories, and in particular the Cache and Staging Post directories, may correspond to mounted folders, targeting network shared locations (NFS, Samba…).
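As an illustration, mounting the cache directory from a hypothetical NFS server via /etc/fstab could look like (server name and export path are examples only):
nfs-server.example.org:/export/openwis/cache  /home/openwis/cache  nfs  defaults  0 0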
The components and config files are available for deployment after the Maven compilation described in 2.4.3.
- Copy the property files into /home/openwis/conf/:
cp ~/openwis-dependencies/data-management-services/openwis-dataservice-config/config/localdatasourceservice.properties ~/conf/
cp ~/openwis-dependencies/data-management-services/openwis-dataservice-config/config/openwis-dataservice.properties ~/conf/
Make appropriate changes to the configuration files located in ~/conf. The following files can be configured:
localdatasourceservice.properties:
For a GISC deployment, leave the values commented.
For a DCPC, configure the LocalDataSource (a sketch follows this list):
- Identification of the local data source:
  - Key: logical name of the data source
  - Value: URL of the WSDL corresponding to the Local Data Source Web Service
- Polling mechanism (to detect availability of new products):
  - Key: <logical name of the data source>.polling
  - Value: true or false to enable/disable the polling of the data source
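A sketch for a DCPC with one hypothetical local data source named mydatasource (the logical name and the WSDL URL are examples only):
mydatasource=http://dcpc.example.org:8180/openwis-localdatasource/LocalDataSourceService?wsdl
mydatasource.polling=true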
openwis-dataservice.properties:
- For both GISC and DCPC:
Adapt the file "openwis-dataservice.properties" to your deployment.
This configuration section contains many parameters. Placeholders surrounded with “@” will need to be replaced.
Most typical deployments will only have to configure the basic ones, shown in the table below.
The following table describes the parameters:
Data Service Directory Locations | Description |
---|---|
dataService.baseLocation | Base location of the data services data directory (e.g. “/home/openwis”). |
cache.dir.stagingPost | The path to the staging post directory. (same as ‘staging.post.uri’ value) |
cache.dir.temp | The path to the temporary directory |
cache.dir.cache | The path to the cache root directory |
cache.dir.harness.incoming | The path to the incoming directory (folder in which GTS flow is dropped) |
cache.dir.harness.ingesting | The path to the folder containing files to ingest (after dispatch) |
cache.dir.harness.working | The path to the working directory (batch files being ingested are split in this folder, after ingesting) |
cache.dir.harness.outgoing | The path to the outgoing directory for feeding |
dataservice.service.status.folder | The path to the folder containing status files for each sub-services. |
cache.replication.config.folder | The path to the folder containing replication sub-folders (sending/receiving) |
FTP Replication Settings | |
cache.replication.config.fromReplication.folder | Name of sub-folder of ingesting and working folders where files are dropped when received from replication |
Cache Feed/Ingest Settings | |
cache.config.numberOfChecksumBytes | The number of bytes of a file on which the checksum will be calculated |
cache.config.location.sendingCentre | The 4-letter-location-code of the sending (the local) centre (used as a prefix of outgoing WMO FTP files). |
Dissemination Configuration | |
staging.post.uri | The path to the staging post directory. |
cache.dissemination.stagingPost.purgeTime | [minutes] The entries in the “StagingPostEntry” database table and the files on the staging post, referenced in those entries, which are older than this value will be deleted. |
cache.config.stagingPostMaximumSize | The maximum number of bytes in the staging post. |
cache.config.cacheMaximumSize | The maximum number of bytes in the cache. |
cache.dissemination.disseminationHarness.public.url | The URL of the public dissemination harness. |
cache.dissemination.disseminationHarness.rmdcn.url | The URL of the RMDCN dissemination harness. |
cache.dissemination.threshold.mail | The maximum number of bytes to send via email |
cache.dissemination.threshold.ftp | The maximum number of bytes to send via ftp |
mail.from | Sender of the blacklisting emailing |
mail.smtp.host | SMTP host for the blacklisting emailing |
mail.smtp.port | SMTP port for the blacklisting emailing |
blacklist.default.nb.warn | Default blacklisting warning threshold regarding the number of files |
blacklist.default.nb.blacklist | Default blacklisting threshold regarding the number of files |
blacklist.default.vol.warn | Default blacklisting warning threshold regarding the volume |
blacklist.default.vol.blacklist | Default blacklisting threshold for the volume |
Management WebService URLs | |
openwis.management.alertservice.wsdl | The WSDL URL of the alert service. (update Host/Port with the location of the Management service) |
openwis.management.controlservice.wsdl | The WSDL URL of the control service. (update Host/Port with the location of the Management service) |
openwis.management.controlservice.defaultFeedingFilterLocation | The full path and name (e.g. ‘defaultFeedingFilters.config’) of a file containing the default feeding filters. The content of this file must be one regular expression per line, each line thus defines a filter. |
openwis.management.disseminateddatastatistics.wsdl | The WSDL URL of the disseminated data statistics. (update Host/Port with the location of the Management service) |
openwis.management.exchangeddatastatistics.wsdl | The WSDL URL of the exchanged data statistics. (update Host/Port with the location of the Management service) |
openwis.management.replicateddatastatistics.wsdl | The WSDL URL of the replicated data statistics. (update Host/Port with the location of the Management service) |
openwis.management.ingesteddatastatistics.wsdl | The WSDL URL of the ingested data statistics. (update Host/Port with the location of the Management service) |
Other Settings | |
cache.cacheManager.housekeepingTimer.expirationWindow | [days] The number of days the cache content will be valid before it gets removed |
cache.cacheManager.purgingTimer.expirationWindow | [ms] The number of milliseconds a file will be kept in temporary directory. |
Table 1: Data Service configuration properties
Note: Advanced parameters should not be modified in a typical deployment.
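A minimal sketch of the basic values, assuming the directory layout created above and a management service on localhost:8180 (hostnames and ports are examples):
dataService.baseLocation=/home/openwis
cache.dir.cache=/home/openwis/cache
cache.dir.stagingPost=/home/openwis/stagingPost
cache.dir.temp=/home/openwis/temp
cache.dir.harness.incoming=/home/openwis/harness/incoming
cache.dir.harness.ingesting=/home/openwis/harness/ingesting
cache.dir.harness.working=/home/openwis/harness/working
cache.dir.harness.outgoing=/home/openwis/harness/outgoing
staging.post.uri=/home/openwis/stagingPost
openwis.management.alertservice.wsdl=http://localhost:8180/openwis-management-service-ejb/AlertService/AlertService?wsdl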
- Download the JDBC driver for PostgreSQL 10 and Java 1.8:
wget https://jdbc.postgresql.org/download/postgresql-42.2.6.jar
- Check that JBOSS_HOME is properly configured:
$ echo $JBOSS_HOME
/home/openwis/wildfly-19.1.0.Final
It is convenient to install the OpenWIS Data and Management services using jboss-cli.
Assuming that OpenWIS has been built as described in 2.6 and that JBOSS_HOME is correctly configured, execute a script with the following commands.
Note: the OpenWIS DB password is defined in section 4.2.
Note: WildFly must be started.
As the openwis user:
#!/bin/bash
#
echo "---Deploy Openwis Data and Management Services via CLI Tools---"
echo ""
echo "-- JBOSS_HOME setting check";
if [ "x$JBOSS_HOME" = "x" ]; then
echo "JBOSS_HOME undefined -> exit !";
exit;
else
echo "JBOSS_HOME ok : $JBOSS_HOME";
fi
echo "--Set the HTTP port to 8180"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/socket-binding-group="standard-sockets"/socket-binding="http":write-attribute(name="port",value=8180)"
sleep 5
echo "--Set the Management Inet Listen Address"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/interface=management:write-attribute(name=inet-address,value=127.0.0.1)"
echo "--Set the Public Inet Listen Address"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/interface=public:write-attribute(name=inet-address,value=0.0.0.0)"
echo "--Set the WSDL Inet Listen Address"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/subsystem=webservices:write-attribute(name="wsdl-host",value=127.0.0.1)"
echo "--Set the WSDL port to 8180"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/subsystem=webservices:write-attribute(name="wsdl-port",value=8180)"
sleep 5
echo "--Configure Deployment Scanner scan-interval"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/subsystem=deployment-scanner/scanner=default:write-attribute(name=scan-interval,value=500)"
echo "--Configure Deployment Scanner auto-deploy true"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/subsystem=deployment-scanner/scanner=default:write-attribute(name=auto-deploy-exploded,value="true")"
echo "--Reload WildFly..."
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 --command=":reload"
sleep 5
echo "--Setup logging"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/periodic-rotating-file-handler="CollectionHandler":add(level=INFO, file={"relative-to"=>"jboss.server.log.dir", "path"=>"collection.log"}, formatter="%d %-5p [%c] %m%n", append=true, autoflush=true, suffix="yyyy-MM-dd")'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/periodic-rotating-file-handler="RequestHandler":add(level=INFO, file={"relative-to"=>"jboss.server.log.dir", "path"=>"request.log"}, formatter="%d %-5p [%c] %m%n", append=true, autoflush=true, suffix="yyyy-MM-dd")'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/periodic-rotating-file-handler="AlertsHandler":add(level=INFO, file={"relative-to"=>"jboss.server.log.dir", "path"=>"alerts.log"}, formatter="%d %-5p [%c] %m%n", append=true, autoflush=true, suffix="yyyy-MM-dd")'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/logger=org.openwis.dataservice.util.WMOFTP:add(use-parent-handlers=true,handlers=["CollectionHandler"])'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/logger=org.openwis.dataservice.gts.collection:add(use-parent-handlers=true,handlers=["CollectionHandler"])'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/logger=org.openwis.dataservice.dissemination:add(use-parent-handlers=true,handlers=["RequestHandler"])'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/logger=org.openwis.datasource:add(use-parent-handlers=true,handlers=["RequestHandler"])'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/logger=org.openwis.management.service:add(use-parent-handlers=true,handlers=["AlertsHandler"])'
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command='/subsystem=logging/periodic-rotating-file-handler=FILE:write-attribute(name="level",value="INFO")'
sleep 5
echo "--Reload WildFly..."
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 --command=':reload'
sleep 5
echo "--Deploy the postgresql driver"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 --command='deploy postgresql-42.2.6.jar'
echo "--Setup the OpenDS data source"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command=' data-source add --name=OpenwisDS --jndi-name="java:/OpenwisDS" --connection-url="jdbc:postgresql://localhost:5432/OpenWIS?stringtype=unspecified" \
--user-name="openwis" --password="$openwis" --driver-name="postgresql-42.2.6.jar" --driver-class="org.postgresql.Driver" \
--min-pool-size=10 --max-pool-size=40 --idle-timeout-minutes=15 --blocking-timeout-wait-millis=15000 --background-validation-millis=50000'
sleep 10
echo "--Setup the JMS queues"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="jms-queue add --queue-address=CollectionQueue --entries=[java:/jms/queue/CollectionQueue]"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="jms-queue add --queue-address=IncomingDataQueue --entries=[java:/jms/queue/IncomingDataQueue]"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="jms-queue add --queue-address=RequestQueue --entries=[java:/jms/queue/RequestQueue]"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="jms-queue add --queue-address=DisseminationQueue --entries=[java:/jms/queue/DisseminationQueue]"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="jms-queue add --queue-address=PackedFeedingQueue --entries=[java:/jms/queue/PackedFeedingQueue]"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="jms-queue add --queue-address=UnpackedFeedingQueue --entries=[java:/jms/queue/UnpackedFeedingQueue]"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="jms-queue add --queue-address=StatisticsQueue --entries=[java:/jms/queue/StatisticsQueue]"
sleep 10
echo "--Reload WildFly..."
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 --command=":reload"
sleep 10
echo "--Deploying Management Service ear file"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="deploy /home/openwis/maven_projects/openwis/openwis-management/openwis-management-service/openwis-management-service-ear/target/openwis-management-service.ear"
sleep 10
echo "--Deploying Data Service ear file"
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="deploy /home/openwis/maven_projects/openwis/openwis-dataservice/openwis-dataservice-server/openwis-dataservice-server-ear/target/openwis-dataservice.ear"
sleep 10
echo "--checking deployments"
$JBOSS_HOME/bin/jboss-cli.sh --controller="localhost:9990" -c --commands=ls\ deployment
sleep 5
echo ""
echo "*** Installation COMPLETE ***"
- Verify that external servers, such as the User and Admin Portals, can reach WildFly:
vim ~/wildfly-19.1.0.Final/standalone/configuration/standalone-full.xml
Interfaces section:
<interfaces>
<interface name="management">
<inet-address value="127.0.0.1"/>
</interface>
<interface name="public">
<inet-address value="0.0.0.0"/>
</interface>
<interface name="unsecure">
inet-address value="${jboss.bind.address.unsecure:0.0.0.0}"/>
</interface>
</interfaces>
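As a cross-check, the interface bindings can also be read back with jboss-cli (a sketch using the same management operations as the script above):
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=127.0.0.1:9990 \
--command="/interface=public:read-attribute(name=inet-address)"
The reported value should be 0.0.0.0, matching the interfaces section above.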
- Verify that WildFly is listening on port 8180:
$ netstat -an | grep LISTEN | grep -v LISTENING
tcp        0      0 127.0.0.1:9990          0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:3528          0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:8180            0.0.0.0:*               LISTEN
- Verify that the OpenWIS Data Service is working:
tail -f $JBOSS_HOME/standalone/log/server.log
...
2020-06-08 16:06:15,884 INFO [org.openwis.dataservice.cache.CacheManagerImpl] (EJB default - 8) +++ purging temporary directory
2020-06-08 16:06:15,885 INFO [org.openwis.dataservice.cache.CacheManagerImpl] (EJB default - 7) +++ Start housekeeping of cache since Sun Jun 07 16:06:15 CEST 2020
2020-06-08 16:06:15,885 INFO [org.openwis.dataservice.cache.CacheManagerImpl] (EJB default - 7) +++ expiration +++ deleting mapped metadata
2020-06-08 16:06:15,886 INFO [org.openwis.dataservice.cache.CacheManagerImpl] (EJB default - 7) +++ expiration +++ deleting cached files
2020-06-08 16:06:15,887 INFO [org.openwis.dataservice.cache.CacheManagerImpl] (EJB default - 7) +++ expiration +++ deleted : 0
2020-06-08 16:06:15,888 INFO [org.openwis.management.service.IngestedDataStatisticsImpl] (EJB default - 7) No ingested data found at Mon Jun 08 02:00:00 CEST 2020
2020-06-08 16:06:15,888 INFO [org.openwis.dataservice.cache.CacheManagerImpl] (EJB default - 7) +++ Finished housekeeping of cache since Sun Jun 07 16:06:15 CEST 2020
...
Apache Solr 8.5.2 installation (as root).
Download Solr:
wget https://archive.apache.org/dist/lucene/solr/8.5.2/solr-8.5.2.tgz
Extract the service installer script and run it:
tar xzf solr-8.5.2.tgz solr-8.5.2/bin/install_solr_service.sh --strip-components=2
./install_solr_service.sh solr-8.5.2.tgz
Check the status of the Solr service:
service solr status
Apache Solr web admin console:
http://[solr_server]:8983/solr
Download the conf file: https://drive.google.com/file/d/1LSOmOIElQffT6NxqGa0Yk6VLhVRZIAHZ/view?usp=sharing
To add a Solr core, execute the following commands as the solr user, not as root.
unzip solr_install_resources.zip
tar -xzf openwis_solr_conf.tar.gz
If needed:
chown -R solr:solr solr_install_resources/
Core creation and configuration:
<solr_home>/bin/solr create_core -c <core_name> -d <configFolder> -V
Example:
sudo -i -u solr
/opt/solr/bin/solr create_core -c core1 -d openwis_solr_conf -V
If switching to solr user fails, use the following command:
runuser -l solr -c '/opt/solr/bin/solr create_core -c core1 -d openwis_solr_conf -V'
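To confirm the new core is up, a quick query against Solr's standard select handler can be used (a sketch assuming the core is named core1, as in the example above):
curl "http://<solr_server>:8983/solr/core1/select?q=*:*&rows=0"
An empty, healthy core returns a JSON response containing "numFound":0.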
Then set the Solr URL in the openwis-metadataportal.properties file of both the Admin and User Portals, and restart Tomcat for both portals:
...
# Solr URL
openwis.metadataportal.solr.url=http://<solr_server>:8983/solr/core1
openwis.metadataportal.graphicOverview.enable=false
Solr installation is done!
The Staging Post is a file system accessible via a Web interface. It can be served by the provided StagingPost web application, deployed on a Tomcat server or alternatively on JBoss. As mentioned in the Data Service section, the Staging Post file system is configured to be in /home/openwis/stagingPost.
Deployment of the Staging Post web app, as the openwis user:
mkdir /home/openwis/stagingPost
cd /home/openwis/stagingPost
cp ~/maven_projects/openwis/openwis-stagingpost/target/stagingPost.war .
jar xvf stagingPost.war
In case the Staging Post is deployed elsewhere, adjust the globalXsltFile locations in WEB-INF/web.xml.
To deploy the Staging Post on Tomcat, edit Tomcat's server.xml file:
vi ~/apache-tomcat-9.0.36/conf/server.xml
Add the following element at the end of the Host element:
<Context docBase="/home/openwis/stagingPost" path="/stagingPost" reloadable="false"/>
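For orientation, the added Context sits just before the closing Host tag; a sketch of the surrounding element, using Tomcat's default Host attributes (your server.xml may differ):
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
  <!-- ...existing elements (Valve, etc.)... -->
  <Context docBase="/home/openwis/stagingPost" path="/stagingPost" reloadable="false"/>
</Host>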
Note: The Admin Portal, User Portal and OpenAM cannot share the same Tomcat instance, nor run under the same OS user.
Deployment of the Admin Portal, as a new user (e.g. openwisAdmin):
mkdir apache-tomcat-9.0.36/webapps/openwis-admin-portal
cd apache-tomcat-9.0.36/webapps/openwis-admin-portal
Find openwis-admin-portal.war, or openwis-admin-portal-admin.war in later versions of OpenWIS, and copy it to apache-tomcat-9.0.36/webapps/openwis-admin-portal.
cp ~/maven_projects/openwis/openwis-metadataportal/openwis-portal/openwis-admin-portal/openwis-admin-portal-admin.war .
jar xvf openwis-admin-portal-admin.war
Likewise, deploy the User Portal as another new user (e.g. openwisUser):
mkdir apache-tomcat-9.0.36/webapps/openwis-user-portal
cd apache-tomcat-9.0.36/webapps/openwis-user-portal
cp ~/maven_projects/openwis/openwis-metadataportal/openwis-portal/openwis-user-portal/openwis-user-portal-user.war .
jar xvf openwis-user-portal-user.war
The configuration is based on the same files/parameters on the User and Admin Portals.
⚠️ This configuration must be done for both deployments: the User Portal and the Admin Portal.
Edit the Admin Portal files as the OS user defined for the Admin Portal, and the User Portal files as the OS user defined for the User Portal.
- config.xml: database connection (GeoNetwork)
vi WEB-INF/config.xml
In the postgresql section, set the user, password and URL used to connect to the PostgreSQL instance.
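For illustration only, the postgresql resource in config.xml has roughly the following shape; the element names come from GeoNetwork and may differ in your version, and all values shown are placeholders:
<resource enabled="true">
  <name>main-db</name>
  <provider>jeeves.resources.dbms.DbmsPool</provider>
  <config>
    <user>openwis</user>
    <password><OPENWIS_DB_PASSWORD></password>
    <driver>org.postgresql.Driver</driver>
    <url>jdbc:postgresql://<DB_HOST>:5432/OpenWIS</url>
  </config>
</resource>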
- openwis-metadataportal.properties: OpenWIS-specific configuration
vi WEB-INF/classes/openwis-metadataportal.properties
Property | Description |
---|---|
openwis.metadataportal.dataservice.*.wsdl | WSDL location of the Data Services |
openwis.metadataportal.securityservice.*.wsdl | WSDL location of the Security Web Services. Update with the Security Service host/port |
openwis.metadataportal.harness.subselectionparameters.wsdl | WSDL location of the SubselectionParameters Harness Web Service |
openwis.metadataportal.harness.mssfss.wsdl | WSDL location of the MSS/FSS Harness Web Service |
openwis.management.*.wsdl | WSDL location of the Management Web Services. Update with the Management Service host/port |
openwis.metadataportal.mssfss.support | Whether the MSS/FSS is supported by the current deployment |
openwis.metadataportal.url.staging.post | Base URL of the Staging Post |
openwis.metadataportal.cache.enable | Whether the current deployment has a Cache facility |
openwis.metadataportal.solr.url | SolR location: http://server:port/openwis-portal-solr |
openwis.metadataportal.date.format | Date format (test only) |
openwis.metadataportal.datetime.format | Date time format (test only) |
openwis.metadataportal.deploy.name | Deployment name as defined with Populate LDAP described in section 3.5 (e.g. GiscMeteoFrance) |
openwis.metadataportal.datapolicy.default.name | Default data policy that will be applied to newly created groups |
openwis.metadataportal.datapolicy.default.operations | Default data policy operation that will be applied to newly created groups |
openwis.metadataportal.sso | OpenSSO URL (used only on Administration portal) |
openwis.metadataportal.oai.maxRecords | Max records processed by OAIPMH in one page |
openwis.metadataportal.acceptedFileExtensions | Accepted list of file extensions (deduced from the metadata and used during file unpacking) |
openwis.metadataportal.monitoring.userportal.url | URL of user portal used to test availability (used on admin portal) |
openwis.metadataportal.monitoring.synchro.warn.limit | Percentage beyond which the availability of synchronization process will be marked in error |
openwis.metadataportal.monitoring.harvesting.warn.limit | Percentage beyond which the availability of harvesting process will be marked in error |
openwis.metadataportal.session.securityservice.tooManyActiveUsers | Number of connected users beyond which an alarm is raised |
openwis.metadataportal.securityservice.tooManyActiveAnonymousUsers | Number of anonymous users beyond which an alarm is raised |
openwis.metadataportal.extract.xpath | XPath of the GTS Category / Data Policy in ISO19139 schema |
openwis.metadataportal.extract.gtsCategoryAdditionalRegexp | Regular expression to interpret a GTS Category found in metadata as WMOAdditional |
openwis.metadataportal.extract.gtsCategoryEssentialRegexp | Regular expression to interpret a GTS Category found in metadata as WMOEssential |
openwis.metadataportal.extract.gtsPriorityRegexp | Regular expression to interpret a GTS Priority found in metadata |
openwis.metadataportal.extract.urnPatternForIgnoredFncPattern | The pattern applied to the URN to determine whether an FNC Pattern found in metadata should be ignored when inserting the metadata (by default the FNC Pattern is ignored when the URN is TTAAiiCCCC-based) |
openwis.metadataportal.catalogsize.alarm.period | Period at which the catalog size is checked against the alarm limit |
openwis.metadataportal.catalogsize.alarm.limit | Catalog size limit beyond which raise an alarm |
openwis.metadataportal.lang.list | Available languages: list of <lang_value/lang_label> |
openwis.metadataportal.report.file.path | Directory where harvesting reports are stored on the admin portal server |
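As an illustration, a few of these properties filled in for a hypothetical deployment (all values below are placeholders; substitute your own hosts and names):
openwis.metadataportal.deploy.name=GiscMeteoFrance
openwis.metadataportal.solr.url=http://<solr_server>:8983/solr/core1
openwis.metadataportal.url.staging.post=http://<frontend-hostname>/stagingPost
openwis.metadataportal.cache.enable=true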
- openwis-deployments.properties: Multi-deployment configuration
vi WEB-INF/classes/openwis-deployments.properties
Property | Description |
---|---|
openwis.cots | List of deployment names that can be used to consult remote requests/subscriptions. These deployments must be in the same circle of trust. |
openwis.backups | List of backup deployment names |
openwis.deployment.url.DEPLOYMENT_NAME | The User portal URL of the backup deployment (for each backup deployment, including the current one) |
openwis.deployment.url.DEPLOYMENT_NAME.admin | The administrator email of the backup deployment (for each backup deployment, including the current one) that will be notified in case of availability error detection |
openwis.backup.warn.rate | The rate of available functions (in %) below which the deployment is considered to be in error |
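A minimal sketch of openwis-deployments.properties for a hypothetical deployment GiscA backed up by GiscB (all names, URLs and addresses are placeholders):
openwis.cots=GiscA,GiscB
openwis.backups=GiscB
openwis.deployment.url.GiscA=http://gisca.example.org/openwis-user-portal
openwis.deployment.url.GiscA.admin=admin@gisca.example.org
openwis.deployment.url.GiscB=http://giscb.example.org/openwis-user-portal
openwis.deployment.url.GiscB.admin=admin@giscb.example.org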
The User and Admin Portals use Google reCAPTCHA v2: https://www.google.com/recaptcha/about/
For this, you need to:
- Create a site
- Add the domain used to the domains list
- Generate the secretKey and siteKey and enter them in the openwis-metadataportal.properties files of the User and Admin Portals.
Google recaptcha
openwis.metadataportal.login.captcha.secretKey=xxx
openwis.metadataportal.login.captcha.siteKey=xxx
Note: For a DEV deployment, enter the following values in the openwis-metadataportal.properties files:
SecretKey: 6LeIxAcTAAAAAGG-vFI1TnRWxMZNFuojJ4WifJWe
SiteKey: 6LeIxAcTAAAAAJcZVRqyHh71UMIEGNQ_MXjiZKhI
Restart Tomcat on both the User and Admin Portals to take the changes into account.
Each OpenWIS portal (User or Admin) will be a Service Provider (SP), and will need to be configured before being registered in the Circle of Trust defined previously. This phase requires the OpenWIS portal to be deployed.
GenerateSPConfFiles is a tool provided to ease the configuration process of a Service Provider.
GenerateSPConfFiles.zip file is located at:
~/maven_projects/openwis/openwis-securityservice/openwis-securityservice-utils/GenerateSPConfFiles/target/GenerateSPConfFiles.zip
If the zip file is not present, build GenerateSPConfFiles.zip:
cd ~/maven_projects/openwis/openwis-securityservice/openwis-securityservice-utils/GenerateSPConfFiles
chmod a+x build/Generate-SP-Conf-Files.sh apache-ant-1.8.1/bin/ant
cd build
./Generate-SP-Conf-Files.sh
Output:
...
zip-content:
[zip] Building zip: /home/openwis/maven_projects/openwis/openwis-securityservice/openwis-securityservice-utils/GenerateSPConfFiles/target/GenerateSPConfFiles.zip
all:
BUILD SUCCESSFUL
Total time: 1 second
Unzip GenerateSPConfFiles tool
Assuming that /home/openwis/ is the PORTAL USER or PORTAL ADMIN HOME:
cd /home/openwis/
mkdir GenerateSPConfFiles
cd GenerateSPConfFiles
unzip ~/maven_projects/openwis/openwis-securityservice/openwis-securityservice-utils/GenerateSPConfFiles/target/GenerateSPConfFiles.zip
Edit configuration.properties in conf directory:
#Home Directory (Windows : c:/Documents and Settings/<USER-NAME>, Other /home/<USER-NAME>)
directory=<PORTAL USER or PORTAL ADMIN HOME>
#Circle Of Trust Name
cot-name =cot_openwis # as defined in step 3.3.1
# Service Provider Name
sp-name =GiscA # logical name of the portal: GiscMF, GiscMFAdmin; name to identify the Service Provider in OpenAM
# Service Provider URL (example: http://<HOST-NAME>:<PORT>/<SP-NAME>)
sp-url =http://<external-sp-hostname>/openwis-user-portal (for user portal)
# IDP Discovery URL (example: http://<HOST-NAME>:<PORT>/idpdiscovery)
idp-discovery=http://<external-idpdiscovery-hostname>/idpdiscovery
# IDPs and SPs of the circle of trust. (comma separated)
trusted-providers =IDP1 # at least the IdP, as defined in step 3.3.1
#All IDPS names
idps-names =IDP1 # at least the IdP, as defined in step 3.3.1
#For each IDP name, create a variable <IDP-NAME>.url=http://<HOST-NAME>:<PORT>/openam
#<IDP-NAME>.url=
IDP1.url=http://<external-idp-hostname>/openam
# at least the IdP, as defined in step **3.3.1**
Run GenerateSPConfFiles
cd ~/GenerateSPConfFiles/apache-ant-1.8.1/bin
chmod a+x *
cd ~/GenerateSPConfFiles
chmod a+x Generate-SP-Conf-Files.sh apache-ant-1.8.1/bin/ant
./Generate-SP-Conf-Files.sh
The tool generates files in /fedlet, which allow the portals (Service Providers) to manage SAML2 communications with the other entities.
⚠️ Before running Generate-SP-Conf-Files.sh more than once, you must manually delete all files contained in the fedlet folder: the script won't replace them, so re-execution won't have any effect.
After each fedlet generation, restart Tomcat.
⚠️ If any Fedlet error occurs, see the Fedlet log files located at /fedlet/debug
Restart the Portal server and then verify via a browser:
For the Admin Portal:
<Admin-PORTAL-HOST-NAME>:<PORT>/openwis-admin-portal/saml2/jsp/exportmetadata.jsp
For the User Portal:
<User-PORTAL-HOST-NAME>:<PORT>/openwis-user-portal/saml2/jsp/exportmetadata.jsp
To add the portals to the Circle of Trust, they must be registered as remote Service Providers in OpenAM.
Log in to the OpenAM admin console with the amadmin user
Click on the “Top Level Realm” button.
In "Common Tasks", click on "Configure SAML2 Providers"
Select “Configure Remote Service Provider”.
In the field "URL where metadata is located", register the URL:
http://<SP_HOST_NAME>:<PORT>/<SP_NAME>/saml2/jsp/exportmetadata.jsp
This URL is the URL of the User or Administration portal to register.
For example: http://vmows2.demo.org/openwis-user-portal/saml2/jsp/exportmetadata.jsp
The portal must be running to complete this step.
If prompted, select the previously created circle of trust named "cot_openwis".
Click on "Configure".
SP configuration completed.
After the installation is complete, the portals must be accessible via:
http://<USER_PORTAL_SERVER>:<PORT>/openwis-user-portal
http://<ADMIN_PORTAL_SERVER>:<PORT>/openwis-admin-portal
At the OpenAM login prompt, enter the credentials of the user(s) created via PopulateLDAP (see section 3.5). If the login succeeds, the installation is working.
If any error occurs, information may be found in:
- Admin or User Tomcat logs
⚠️ In ~/apache-tomcat-9.0.24/logs/localhost.YYYY-MM-dd.log, if the following error arises:
... SEVERE [http-nio-8080-exec-1] org.apache.catalina.core.StandardWrapperValve.invoke Servlet.service() for servlet [openWisInit] in context with path [/openwis-user-portal] threw exception [Servlet execution threw an exception] with root cause java.lang.ClassNotFoundException: com.sun.identity.security.cert.CRLValidator
then apply the workaround described in issues#348
- OpenWIS portal log files located at /home/<Admin or User>/logs
- Tomcat logs (OpenDJ, OpenAM, OpenWIS Security Services)
- JBoss logs (openwis-dataservice, openwis-management-service)
An Apache server is used as a front-end to access:
- the user and admin portals
- OpenAM
- OpenWIS Security Services (load balancing/failover)
- Data Service (load balancing/failover)
- the Staging Post
Apache is installed by default on RedHat 5.5+.
The configuration of Apache is done via the configuration file: /etc/httpd/conf/httpd.conf.
Configuration details are given in the next paragraphs.
Start Apache Web Server:
service httpd start
service httpd status
Notes:
To allow the Apache proxy to work with SELinux, run as root:
setsebool httpd_can_network_connect on
If getsebool reports "SELinux is disabled", edit /etc/sysconfig/selinux, change SELINUX=disabled to SELINUX=permissive, and reboot the OS.
To test the value:
getsebool httpd_can_network_connect
httpd_can_network_connect --> on
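Note: setsebool without options does not survive a reboot; to make the boolean persistent, use the -P flag:
setsebool -P httpd_can_network_connect on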
Warning:
- If failover and site are not configured: the hostname of the proxied OpenAM needs to be exactly the same as the one used during OpenAM installation (hostname, not IP address), or the whole OpenAM installation needs to be done via the front-end hostname.
- In a failover and site configuration: the frontend hostname will be the frontend of the configured site.
For a simple front-end configuration:
ProxyPreserveHost On
# Proxy for OpenAM
ProxyPass /opensso http://<HOST_OPENAM>:8080/openam
ProxyPassReverse /opensso http://<HOST_OPENAM>:8080/openam
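Note: the ProxyPass/ProxyPassReverse directives require mod_proxy and mod_proxy_http, and the balancer examples below additionally use mod_proxy_balancer, mod_headers and (on Apache 2.4) mod_lbmethod_byrequests. These are normally loaded by default on RedHat; if a directive is rejected at startup, check httpd.conf (or /etc/httpd/conf.modules.d/ on Apache 2.4) for the corresponding LoadModule lines, e.g.:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so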
For a failover configuration with load balancing between 2 OpenAM instances:
ProxyRequests Off
# Load balancer for OpenAM
<Proxy balancer://cluster-opensso>
Order deny,allow
Allow from all
BalancerMember http://<HOST_OPENAM_1>:8080/opensso route=node1
BalancerMember http://<HOST_OPENAM_2>:8080/opensso route=node2
ProxySet lbmethod=byrequests
ProxySet stickysession=APLBCOOKIE
</Proxy>
Header add Set-Cookie "APLBCOOKIE=APACHE.%{BALANCER_WORKER_ROUTE}e;path=/;" env=BALANCER_ROUTE_CHANGED
ProxyPass /opensso balancer://cluster-opensso
ProxyPassReverse /opensso http://<HOST_OPENAM_1>:8080/openam
ProxyPassReverse /opensso http://<HOST_OPENAM_2>:8080/openam
For a simple front-end configuration:
# Proxy for IDP Discovery
ProxyPass /idpdiscovery http://<HOST_IDP_DISCOVERY>:8080/idpdiscovery
ProxyPassReverse /idpdiscovery http://<HOST_IDP_DISCOVERY>:8080/idpdiscovery
For a failover configuration with load balancing between 2 IdpDiscovery instances:
# IDP Discovery
<Proxy balancer://cluster-idpdiscovery>
Order deny,allow
Allow from all
BalancerMember http://<HOST_IDPDISCOVERY_1>:8080/idpdiscovery route=node1
BalancerMember http://<HOST_IDPDISCOVERY_2>:8080/idpdiscovery route=node2
</Proxy>
ProxyPass /idpdiscovery balancer://cluster-idpdiscovery lbmethod=byrequests
ProxyPassReverse /idpdiscovery http://<HOST_IDPDISCOVERY_1>:8080/idpdiscovery
ProxyPassReverse /idpdiscovery http://<HOST_IDPDISCOVERY_2>:8080/idpdiscovery
When configuring failover for the OpenWIS Security Web Services, the Apache frontend is used as a load balancer between the 2 Tomcat instances (used for OpenAM). The Apache configuration is as follows:
# Security service Loadbalancer
<Proxy balancer://cluster-securityservice>
Order deny,allow
Allow from all
BalancerMember http://<HOST_OPENAM_1>:8080/openwis-securityservice route=node1
BalancerMember http://<HOST_OPENAM_2>:8080/openwis-securityservice route=node2
</Proxy>
ProxyPass /openwis-securityservice balancer://cluster-securityservice lbmethod=byrequests
ProxyPassReverse /openwis-securityservice http://<HOST_OPENAM_1>:8080/openwis-securityservice
ProxyPassReverse /openwis-securityservice http://<HOST_OPENAM_2>:8080/openwis-securityservice
ProxyPreserveHost On
# Proxy for User portal
ProxyPass /openwis-user-portal http://<HOST_USER_PORTAL>:8080/openwis-user-portal
ProxyPassReverse /openwis-user-portal http://<HOST_USER_PORTAL>:8080/openwis-user-portal
# Proxy for Admin portal (if admin portal is available on this zone)
ProxyPass /openwis-admin-portal http://<HOST_ADMIN_PORTAL>:8080/openwis-admin-portal
ProxyPassReverse /openwis-admin-portal http://<HOST_ADMIN_PORTAL>:8080/openwis-admin-portal
Replace HOST_USER_PORTAL and HOST_ADMIN_PORTAL with the virtual address used for the Active/Passive configuration.
Redirect /
It is generally useful to be able to access the application by giving just the hostname, without having to specify the name of the web application; for example, using http://wispi.meteo.fr instead of http://wispi.meteo.fr/openwis-user-portal to access the User Portal. This can be configured with the following Redirect directive:
Redirect / http://<HOST_USER_PORTAL:PORT>/openwis-user-portal
Or similarly for the Admin Portal:
Redirect / http://<HOST_ADMIN_PORTAL:PORT>/openwis-admin-portal
ProxyPass /stagingPost http://<HOST_STAGINGPOST_WEBAPP>:8080/stagingPost
ProxyPassReverse /stagingPost http://<HOST_STAGINGPOST_WEBAPP>:8080/stagingPost
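Putting the pieces together, a minimal front-end configuration for one zone might look like the sketch below; hostnames are placeholders, and only the proxy blocks relevant to your zone should be kept:
<VirtualHost *:80>
    ServerName <frontend-hostname>
    ProxyPreserveHost On
    # User portal
    ProxyPass /openwis-user-portal http://<HOST_USER_PORTAL>:8080/openwis-user-portal
    ProxyPassReverse /openwis-user-portal http://<HOST_USER_PORTAL>:8080/openwis-user-portal
    # Staging Post
    ProxyPass /stagingPost http://<HOST_STAGINGPOST_WEBAPP>:8080/stagingPost
    ProxyPassReverse /stagingPost http://<HOST_STAGINGPOST_WEBAPP>:8080/stagingPost
</VirtualHost>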
[OpenWIS Association](http://openwis.github.io/openwis-documentation/)
[OpenWIS repository](https://github.com/OpenWIS)
[IG-OpenWIS-1-v02 9-MigrationOpenAM-v3](https://github.com/OpenWIS/openwis/blob/master/docs/IG-OpenWIS-1-v02%209-MigrationOpenAM-v3.doc)