Directory Structure of IOM Projects - intershop/iom-project-archetype GitHub Wiki

Introduction

IOM projects need a standardized project layout to be able to extend the standard IOM Docker images in a generic way. IOM projects, derived from IOM Project Archetype, follow the rules described in this document.

SQL Configuration

Overview

SQL configuration of projects is always a mixture of configuration of IOM standard features and project-specific database artifacts.

SQL scripts are used to:

  • Create new database objects including stored procedures.
  • Fill in some core tables with required data.
  • Fill in business configuration tables with data.
  • Possibly duplicate the definition of custom enumeration values from Java to the database to handle a chicken-and-egg problem.
  • Modify existing objects and data during version upgrades.

The database objects and data required by the IOM core product and their possible modifications are provided by the standard IOM product in the form of an initial dump and SQL scripts. These core SQL scripts are always performed prior to the project-specific scripts during project setup or version upgrades.

Table definitions and database functions provided for the configuration of IOM can be considered as APIs that may change between core IOM versions. The project-specific SQL scripts may rely on these APIs and must therefore be kept up to date during IOM version upgrades.

Project-Specific SQL Scripts

Database initialization and configuration of project-specific customizations must be implemented with a set of SQL scripts organized in a given directory structure (see below). These scripts will always be performed after the scripts of the IOM core product.

Note: The project-specific database objects are not created during the DB initialization of the product. These object definitions will also not be migrated by the IOM standard product. The project has to take care of the migration itself.

It is essential that SQL scripts follow these rules:

  • All scripts must be idempotent: the SQL code must produce exactly the same result when run again.
  • The scripts may contain tests to check whether a data migration step has already been performed, in order to avoid repeated data updates, which can be time-consuming and generate unnecessary table- and index-bloat.
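A minimal sketch of such an idempotent script could look like the following. The table and value names are purely hypothetical and not part of the IOM product; only the idempotency pattern itself is the point:

```sql
-- Hypothetical project table; IF NOT EXISTS makes the creation idempotent.
CREATE TABLE IF NOT EXISTS custom_carrier_mapping (
    id          BIGINT PRIMARY KEY,
    carrier     TEXT   NOT NULL,
    external_id TEXT   NOT NULL
);

-- Idempotent data fill: ON CONFLICT avoids duplicate rows and repeated
-- updates that would otherwise cause table- and index-bloat on every run.
INSERT INTO custom_carrier_mapping (id, carrier, external_id)
VALUES (1, 'DHL', 'CARRIER-001')
ON CONFLICT (id) DO NOTHING;
```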

Directory Structure

The sql-config directory is split into three main sub-directories.

src/sql-config/
├── dbinit/
│   ├── 001_....sql
│   └── <N>_*.sql
├── dbmigrate/
│   ├── 001_....sql
│   └── <N>_*.sql
└── config/
    ├── base/
    │   ├── 001_....sql
    │   ├── 003_....sql
    │   └── <N>_*.sql
    └── env/
        ├── <env-name 1>/
        │   ├── 002_....sql
        │   └── <N>_*.sql
        ├── <env-name 2>/
        │    └── ...
        └── ...

dbinit

  • SQL scripts to create database objects required by project-specific customizations.
  • SQL scripts to fill in tables with initial data.

dbmigrate

  • SQL scripts to modify project-specific database objects.
  • SQL scripts to modify or delete some project data (except configurations).

Note: The two folders dbinit and dbmigrate are meant to organize the scripts according to their content. Both of them will always be executed, so you can also decide to use only one of them.

config

  • SQL scripts to fill in project-specific tables created before.
  • SQL scripts to fill in standard (core) tables with the project business configuration.
  • Provides support for environment-specific configurations (e.g. production, integration test, development, etc.). Scripts in base sub-directory are always executed, whereas scripts located in one of the env sub-directories are executed only when the environment setting of the current installation matches the directory name found in the env sub-directory.

Order of Execution

The content of dbinit, dbmigrate, and config are executed one after another.

All SQL scripts must have a numerical prefix. This prefix defines the order of execution within each of the three main folders, starting with the lowest number. Within config, only the numerical prefix determines the execution order; the location within the sub-directories base or env/ does not matter.
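Using the config tree shown above as an example (base contains the prefixes 001 and 003, the first environment contains 002; the file names are placeholders), the resulting execution order for that environment would be:

```
config/base/001_....sql
config/env/<env-name 1>/002_....sql
config/base/003_....sql
```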

Error Handling

Each script will be processed by a separate call to the PostgreSQL client psql. In case of an error, the container (IOM project image) will fail and Kubernetes will restart the pod until the initialization succeeds or the timeout has passed. Only the changes performed by the current script will be rolled back; modifications done by the previous scripts remain.

Synchronization of Java Enums

Most IOM Java Enums (or IOM-specific "extensible Enums") also exist as database tables, and many SQL scripts need to refer to some of them. There is a process during application startup that takes care of the synchronization.

This causes a chicken-and-egg dilemma, as the SQL scripts must run prior to the application start during setup or upgrade processes. To solve it, such new Enum values have to be added by a script as well, either in the folder dbinit or dbmigrate, to register them within the database prior to the application start and make them available to the following SQL scripts.
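As a sketch, registering a new enum value before application start could look like this, assuming the Java enum is mirrored by a database table; the table, column, and value names used here are placeholders, not actual IOM identifiers:

```sql
-- Hypothetical: make a new enum value known to the database before the
-- application starts and before later SQL scripts refer to it.
INSERT INTO custom_order_state (id, name)
VALUES (10001, 'RESERVED_FOR_PICKUP')
ON CONFLICT (id) DO NOTHING;
```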

Runtime Scope

During runtime, the SQL configuration to be executed is selected by the Helm parameter project.envName, see IOM Helm Charts.

Development Scope

The IOM Development Environment (devenv-4-iom) provides a process which is able to execute SQL scripts from a directory matching the structure described above. For testing single aspects of the SQL configuration, it is also possible to execute individual SQL scripts.

Devenv-4-iom provides the configuration variable PROJECT_ENV_NAME to define the SQL configuration to be executed.

Mail Templates

Overview

Mails can be customized for projects by adding new mail templates or by overwriting existing ones.

Directory Structure

src/mail-templates/
├── mails_customers/
│   └── ...
└── mails_operations/
    └── ...

Mail templates belonging to a project have to be placed into the directory src/mail-templates.

All sub-directories and files located in mail-templates will be copied to the directory $OMS_VAR/templates within the IOM project image. Therefore, you have to use the same directory structure as in $OMS_VAR/templates. For more information, see sections Default E-mail templates and custom templates in Reference - IOM Customer Emails. Also see Concept - IOM Customer E-Mails.

There is no support for environment-specific mail templates.

Development Scope

When using the IOM Development Environment (devenv-4-iom), a script is provided that is able to roll out custom mail templates into the development environment.

XSL Templates

Overview

Documents can be customized for projects by adding new XSL templates or by overwriting existing ones.

Directory Structure

src/xsl-templates/
├── configuration/
│   └── ...
├── shop_default/
│   └── ...
├── utils/
│   └── ...
└── ...

XSL templates belonging to a project have to be placed into the directory xsl-templates within the project sources.

All sub-directories and files located in xsl-templates will be copied to the directory $OMS_VAR/xslt of the IOM project image. Therefore, you have to use the same directory structure as in $OMS_VAR/xslt. For more information, see Reference - IOM Customer E-mails.

There is no support for environment-specific XSL templates.

Development Scope

When using the IOM Development Environment (devenv-4-iom), a script is provided that rolls out custom XSL templates into a running IOM development system.

Custom Properties

Overview

cluster.properties

cluster.properties contains properties that can be controlled by the IOM project. The complete list of properties can be found in Reference - IOM Properties.

cluster.properties does not support environment-specific settings; it can only be defined globally within the base settings.

project.cluster.properties

There are other configurations that are mainly used to control the behavior of the Wildfly application server. These are located in the system.std.cluster.properties file of the IOM product. The project may bring an according project.cluster.properties file, which is applied in addition to system.std.cluster.properties. This enables projects to overwrite settings made in system.std.cluster.properties or to add new settings. It is impossible to overwrite any property that is defined in cluster.properties.

project.cluster.properties does not support environment-specific settings; it can only be defined globally within the base settings.

quartz-jobs-custom.xml

It is possible to delete standard IOM jobs, overwrite standard IOM jobs, and add custom jobs by defining them in quartz-jobs-custom.xml. IOM's standard jobs are defined in quartz-jobs-cluster.xml. During runtime, both configuration files are loaded: first quartz-jobs-cluster.xml, then quartz-jobs-custom.xml. This makes it possible to delete and overwrite IOM standard jobs as well as to define custom jobs.

If quartz-jobs-custom.xml is defined both in base and within an environment, the files are not merged in any way. Only the most specific file will be selected, which is the environment-specific version if the according environment name is requested at runtime.

Use the following template to modify or add job configuration of your project:

quartz-jobs-custom.xml

<?xml version="1.0" encoding="utf-8"?>
<job-scheduling-data
    xmlns="http://www.quartz-scheduler.org/xml/JobSchedulingData"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.quartz-scheduler.org/xml/JobSchedulingData http://www.quartz-scheduler.org/xml/job_scheduling_data_2_0.xsd"
    version="2.0">
  <!--
      All job information is held in RAM only, it is not persisted anywhere.
      Additionally, this file will never change during runtime, since it is part of the Docker image (product- or project-image).
      Hence, all pre-processing-commands and processing-directives have very limited impact.
      When the IOM application server starts, there cannot be any jobs that could be deleted or overwritten.
      But this file is loaded after quartz-jobs-cluster.xml, which defines the standard IOM jobs. This makes it possible to
      * Delete standard jobs by using the according pre-processing-command (see http://www.quartz-scheduler.org/xml/job_scheduling_data_2_0.xsd).
      * Overwrite standard jobs by redefining them here.
      * Simply add own jobs. In this case, it is a good idea to assign them to group CUSTOM.
  -->
  <pre-processing-commands>
    <!-- clear all jobs and trigger of group CUSTOM -->
    <delete-jobs-in-group>CUSTOM</delete-jobs-in-group>
    <delete-triggers-in-group>CUSTOM</delete-triggers-in-group>
  </pre-processing-commands>
  <processing-directives>
    <!-- enable overwriting of existing jobs -->
    <overwrite-existing-data>true</overwrite-existing-data>
    <ignore-duplicates>false</ignore-duplicates>
  </processing-directives>
  <schedule>
    <!-- add custom jobs and triggers here -->
  </schedule>
</job-scheduling-data>

quartz-jobs-custom.xml supports the template variable ${platform.version}, which is automatically replaced by the current version of IOM. This mechanism helps to lower maintenance effort when referencing resources of the IOM platform.

All the standard Quartz jobs defined by IOM only trigger tasks handled by the application Control, which is a singleton application running on one IOM application server only. However, the Quartz sub-system itself is rolled out and activated on all IOM application servers. Hence, using the Quartz sub-system does not restrict the execution of jobs to a single IOM application server. See Guide - Intershop Order Management - Technical Overview.

initSystem.project.cluster.cli

Some configuration settings cannot be changed by project.cluster.properties, since more sophisticated CLI code is required. In this case, projects can place Wildfly CLI code directly into the initSystem.project.cluster.cli file in the custom properties directory structure.

Intershop does not recommend this kind of project customization/configuration, since it makes upgrades much more difficult. You cannot rely on the sub-systems bundled with the current version of Wildfly: when upgrading IOM, the Wildfly version might be upgraded as well, and with it, the sub-systems bundled with Wildfly might change too.

For more information, please see https://docs.wildfly.org/26/Admin_Guide.html.

initSystem.project.cluster.cli does not support environment-specific settings; it can only be defined globally within the base settings.

Directory Structure

The root directory holding custom properties is named etc, reflecting the name of the according configuration directory of the IOM product. Within etc, the well-known directory structure for environment-specific settings is used. Files within the base directory are applied to all installations, whereas files in the env directory are applied only if the environment matches.

Please be aware that only quartz-jobs-custom.xml supports environment-specific settings!

src/etc/
├── base/
│   ├── cluster.properties
│   ├── quartz-jobs-custom.xml
│   ├── initSystem.project.cluster.cli
│   └── project.cluster.properties
└── env/
    ├── <env-name 1>/
    │   └── quartz-jobs-custom.xml
    ├── <env-name 2>/
    │   └── quartz-jobs-custom.xml
    └── ...

Development Scope

The IOM Development Environment (devenv-4-iom) provides a process for the execution of CLI scripts. The environment that should be used by devenv-4-iom is set by the configuration parameter PROJECT_ENV_NAME.

Runtime Scope

During runtime the configuration is selected by the Helm parameter project.envName, see IOM Helm Charts.

Test Data

Overview

Projects may require test data to be loaded automatically into IOM, e.g. on test systems. Therefore, it is possible to manage test data in IOM standard projects too. Test data might be loaded only for specific environments (e.g. test systems) or into systems of any environment. Again, the familiar directory structure is used to distinguish between these options.

Directory Structure

The root directory holding the test data is named test-data. Within test-data, the well-known directory structure for environment-specific settings is used. Files within the base directory are copied to all installations, whereas files in the env directory are copied only if the environment matches.

src/test-data/
├── base/
│   ├── <import file 1>
│   └── ...
└── env/
    ├── <env-name 1>/
    │   ├── <import file 1>
    │   └── ...
    ├── <env-name 2>/
    │    └── ...
    └── ...

Development Scope

Test data are loaded into the IOM Development Environment (devenv-4-iom) if they are part of the custom IOM image and the according configuration variables are set: PROJECT_ENV_NAME has to be set to an environment containing test data, and PROJECT_IMPORT_TEST_DATA has to be set to true.

Runtime Scope

During runtime, the environment is selected by the Helm parameter project.envName. Additionally, project.importTestData has to be set to true, see IOM Helm Charts.

Project-Files

Overview

Project-files provides a generic directory structure to add files and directories required by projects. The content of project-files will be copied recursively to $OMS_VAR/project-files.

An example for the usage of project-files are public keys, which are required to automate file transfers to external partners. Public key files have to be referenced within the file system by the SQL configuration. Since the directory structure of IOM is not fixed, the SQL configuration has to be able to determine their position within the file system in a flexible way. For example, if the keys are all placed within the sub-directory public-keys located in project-files, the files will be copied to $OMS_VAR/project-files/public-keys. The SQL configuration uses the according system property to create a valid reference to a key file: "${is.oms.dir.var}/project-files/public-keys/<file 1>".
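For illustration only, a reference to such a key file could be written into a configuration table as sketched below. The table and column names are invented for this sketch; only the path pattern with ${is.oms.dir.var} is taken from the description above:

```sql
-- Hypothetical configuration table holding a key-file reference.
-- The path placeholder resolves to $OMS_VAR at the described location.
INSERT INTO transfer_partner_config (partner, public_key_path)
VALUES ('partner-a', '${is.oms.dir.var}/project-files/public-keys/partner-a.pub')
ON CONFLICT (partner)
DO UPDATE SET public_key_path = EXCLUDED.public_key_path;
```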

Directory Structure

Project-files does not support environment-specific differences. The complete directory structure is copied recursively on all environments to $OMS_VAR/project-files.

src/project-files/
├── <custom dir>/
│   ├── <custom dir>/
│   │   ├── <custom file>
│   │   └── <custom file>
│   └── <custom dir>/
│       ├── <custom dir>/
│       │   ├── <custom file>
│       │   ├── <custom file>
│       │   └── ...
│       └── ...
└── ...

Development Scope

Project files are available in IOM Development Environment (devenv-4-iom) if they are part of the custom IOM image.

Configuration of Logging

Overview

The following box shows the default logging configuration of IOM. This configuration helps to understand the main concepts of logging in IOM:

  • There are different console-handlers logging to stdout. All these handlers use a common formatter named JSON. The main reason for the existence of different log-handlers is the ability to control the log level of each handler separately by Docker environment variables.

  • Each handler is responsible for logging the output of different Java packages. The assignments can be seen in the second part of the configuration, which assigns Java package names to log-handlers.

  • The CONSOLE handler has no explicit assignments of Java packages. Instead, it is assigned to the root logger, which does not need assignments, and therefore also handles all Java packages that are not assigned to any other handler.

  • Another handler without package assignments is CUSTOMIZATION. In contrast to CONSOLE, this handler will not log any messages as long as no Java packages are assigned to it. The assignment of Java packages has to be done in the project configuration and is described in more detail below.

default logging configuration of IOM

#-------------------------------------------------------------------------------
# configure predefined log-handlers
#-------------------------------------------------------------------------------

/subsystem=logging/console-handler=CONSOLE:       named-formatter="JSON", level="${env.OMS_LOGLEVEL_CONSOLE}"
/subsystem=logging/console-handler=IOM:           named-formatter="JSON", level="${env.OMS_LOGLEVEL_IOM}"
/subsystem=logging/console-handler=HIBERNATE:     named-formatter="JSON", level="${env.OMS_LOGLEVEL_HIBERNATE}"
/subsystem=logging/console-handler=QUARTZ:        named-formatter="JSON", level="${env.OMS_LOGLEVEL_QUARTZ}"
/subsystem=logging/console-handler=CUSTOMIZATION: named-formatter="JSON", level="${env.OMS_LOGLEVEL_CUSTOMIZATION}"

#-------------------------------------------------------------------------------
# assign java-packages to log-handlers
#-------------------------------------------------------------------------------

/subsystem=logging/logger=bakery:                    handlers=[IOM],       use-parent-handlers="false", level="ALL"
/subsystem=logging/logger=com.intershop.oms:         handlers=[IOM],       use-parent-handlers="false", level="ALL"
/subsystem=logging/logger=com.theberlinbakery:       handlers=[IOM],       use-parent-handlers="false", level="ALL"
/subsystem=logging/logger=org.jboss.ejb3.invocation: handlers=[IOM],       use-parent-handlers="false", level="ALL"
/subsystem=logging/logger=org.hibernate:             handlers=[HIBERNATE], use-parent-handlers="false", level="ALL"
/subsystem=logging/logger=org.quartz:                handlers=[QUARTZ],    use-parent-handlers="false", level="ALL"

Runtime Scope

Centralized Controlling of Log Level of Customization Artifact

The simplest logging configuration for projects can be realized by assigning all Java packages of the customization artifact to the CUSTOMIZATION log-handler. When doing so, the log level of the customization artifact can be controlled at runtime by setting the Helm parameter log.level.customization.

The assignment of the Java packages belonging to the customization artifact is realized by adding additional entries to project.cluster.properties.

The following box shows a configuration example. You can apply these settings to your own project. To do so, replace the Java package names with the names used in your project.

project.cluster.properties

/subsystem=logging/logger=com.my_company.iom_customization: handlers=[CUSTOMIZATION], use-parent-handlers="false", level="ALL"

Use Different Log Levels for Different Packages of Customization Artifact

If you want to use different log levels for different Java packages of your customization artifact, you have to use different logger configurations for these packages. The Helm parameter log.level.customization defines the lowest log level to be logged. In combination with the configuration shown below, the logging system shows the following behavior:

| log.level.customization | pkg1 FATAL | pkg1 ERROR | pkg1 WARN | pkg1 INFO | pkg1 DEBUG | pkg1 TRACE | pkg2 FATAL | pkg2 ERROR | pkg2 WARN | pkg2 INFO | pkg2 DEBUG | pkg2 TRACE |
|-------------------------|------------|------------|-----------|-----------|------------|------------|------------|------------|-----------|-----------|------------|------------|
| FATAL                   | x          |            |           |           |            |            | x          |            |           |           |            |            |
| ERROR                   | x          | x          |           |           |            |            | x          | x          |           |           |            |            |
| WARN                    | x          | x          | x         |           |            |            | x          | x          | x         |           |            |            |
| INFO                    | x          | x          | x         | x         |            |            |            |            |           |           |            |            |
| DEBUG                   | x          | x          | x         | x         | x          |            |            |            |           |           |            |            |
| TRACE                   | x          | x          | x         | x         | x          | x          |            |            |           |           |            |            |

src/etc/base/project.cluster.properties

/subsystem=logging/logger=com.my_company.iom_customization.pkg1: handlers=[CUSTOMIZATION], use-parent-handlers="false", level="ALL"
/subsystem=logging/logger=com.my_company.iom_customization.pkg2: handlers=[CUSTOMIZATION], use-parent-handlers="false", level="WARN"

Attention! Please be aware that you cannot define different log levels at runtime for the CUSTOMIZATION handler! There is only one variable (log.level.customization) that defines the log level to be used!

See also the Wildfly Admin Guide: https://docs.wildfly.org/26/Admin_Guide.html#Logging

Development Scope

When using the IOM Development Environment (devenv-4-iom), the log level of the customization can be set by the configuration variable OMS_LOGLEVEL_CUSTOMIZATION.
