Server Sizing and Tuning Guidelines
IBM® Tivoli Application Discovery and Dependency Manager (TADDM) contains a number of key customer-driven functional enhancements, including the following items:
- Reconciliation
- Standardization
- Merging
- Pluggable interface for sensors
- Model Extensions
- Stack sensors (credential-less sensors)
- Discovery Profiles
The guidelines and recommendations in this paper apply to all releases of TADDM version 7.
Because enterprise data center environments vary, IBM defines the concept of a Server Equivalent to normalize and present a standard set of performance and scale metrics.
Server Equivalent (SE): A representative unit of IT infrastructure, defined as a computer system (with standard configurations: operating system, network interfaces, storage interfaces) installed with server software such as a database (such as DB2® or Oracle), a Web server (such as Apache or iPlanet), or an application server (such as WebSphere® or WebLogic). An SE also accounts for the network, storage, and other subsystems that support the optimal functioning of the server. Each SE consists of a number of Configuration Items (CIs).
Configuration Item (CI): As defined by ITIL, a CI is any component that is under the control of Configuration Management and therefore subject to formal Change Control. Each CI in the CMDB has a persistent object and change history associated with it. Examples of a CI are a computer system, an operating system, L2 interface, and database buffer pool size. A Server Equivalent consists of approximately 200 CIs.
The following summary is a guideline for selecting the appropriate hardware resources.
- You should use a fast multiprocessor system for the TADDM Application Server, particularly if it shares the same system with the database server. TADDM, DB2 databases, and Oracle databases are all enabled to take advantage of multiple processors and parallel operations.
The following general guidelines can be used to estimate the processor, memory, and disk space requirements for your TADDM implementation. All guidelines are minimum specifications; other factors, such as the number of users, can impact server utilization. For these guidelines, the TADDM server and the database server are on separate systems.
Regarding the disk space guideline: The product installation guide states that 100 GB of disk space are required.
That number was published because many customers were allocating insufficient space for TADDM, particularly for the logging and tracing of information. If you prefer, you can use the formulas that follow to estimate disk space requirements, paying particular attention to growth, TADDM and database logging, and so on. If you do not use these formulas, continue to use the 100 GB recommendation.
Regarding the processor guideline: The product installation guide states a minimum of 2 processors on the database server. That number was published prior to any method or other guidelines regarding processor numbers and speed. If you prefer, you can use the guidelines that follow to estimate processor requirements. If you do not use these guidelines, continue to use the 2 processor minimum for the database server.
TADDM: A robust application mapping and discovery tool that automatically gathers an inventory of all applications and dependencies, helps you understand configurations, and helps prove compliance. TADDM includes detailed reports and auditing tools.
For the purposes of sizing, the following categories of TADDM servers are used, based on Server Equivalents:
- Small: up to 2,000 SEs
- Medium: 2,000 - 5,000 SEs
- Large: 5,000 - 10,000 SEs
- Enterprise: > 10,000 SEs
Function: An instance of TADDM (including discovery, analytics and database).
Processor requirements: 2 GHz (minimum), 3 GHz (or faster) recommended
- Small: 2 processors
- Medium: 3 processors
- Large: 4 processors
Memory requirements:
- Small: 4 GB
- Medium: 6 GB
- Large: 8 GB
Disk requirements:
- 5 GB minimum: includes product installation
- 50 GB minimum additional space: This might be required for DLA books, additional logging and tracing requirements, and so on.
Function: The database that a domain server uses to store topology and configuration data, which is populated using Sensors, DLAs or the TADDM API. Most customers, particularly large enterprise clients, keep their databases, including the TADDM database, on a separate database server.
Processor requirements: 2 GHz (minimum), 3 GHz (or faster) recommended
- Small: 1 processor
- Medium: 2 processors
- Large: 2 processors
Memory requirements: (minimum)
- Small: 1 GB
- Medium: 2 GB
- Large: 3 GB
Disk requirements:
Database disk space requirements include space for the following components:
- system catalog
- tables
- indexes
- logs
- temp space (for sorts, joins, and so on)
- backup space
Disk space and disk drive requirements for a database server are not a function of only disk capacity. As stated in the Hardware section of this paper, consideration must also be given for I/O operations.
- Disk drive requirements: 2 (minimum), 3 (or more) recommended
- Disk space requirements: See Initial Disk Space Calculation to calculate estimated disk space requirements.
- Initial disk space required for the database logs; needed to create the TADDM schema: 160 MB
Function: An instance of the eCMDB is used to link together one or more Domain Servers.
Processor requirements: 2 GHz (minimum), 3 GHz (or faster) recommended
- Enterprise: 4 processors
Memory requirements:
- Enterprise: 8 GB
Disk requirements:
- 5 GB minimum: This includes product installation.
- 50 GB minimum additional space: This might be required for DLA books, additional logging and tracing requirements, and so on.
Function: The database that the eCMDB uses to store topology and configuration data, which is populated using synchronization with one or more Domain Servers. Most customers, particularly large enterprise clients, keep their databases, including the TADDM database, on a separate database server.
Processor requirements: 2 GHz (minimum), 3 GHz (or faster) recommended
- Enterprise: 4 processors
Memory requirements: (minimum)
- Enterprise: 4 GB
Disk requirements:
Database disk space requirements include space for the following components:
- system catalog
- tables
- indexes
- logs
- temp space (for sorts, joins, and so on)
- backup space
Disk space and disk drive requirements for a database server are not a function of only disk capacity. As stated in the Hardware section of this paper, consideration must also be given to I/O operations.
- Disk drive requirements: 2 (minimum), 3 (or more) recommended
- Disk space requirements: See Initial Disk Space Calculation to calculate estimated disk space requirements.
- Initial disk space required for the database logs; needed to create the TADDM schema: 160 MB
You can use the following formulas to estimate the initial amount of disk space that is required for your TADDM implementation. These estimates are based on Level 3 discovery type data. Depending on the breadth and depth of data in your environment, the disk space requirements can change. You can base your estimate on either the number of Configuration Items or the number of Server Equivalents.
- CI No = Number of Configuration Items
  - approximately 4,000 bytes per CI
- SE = Number of Server Equivalents
  - 200 CIs per SE
  - approximately 800,000 bytes per SE
- CI-RDS = Amount of Raw Disk Space for CIs, without overhead
  - CI No x 4,000
- SE-RDS = Amount of Raw Disk Space for SEs, without overhead
  - SE x 800,000
- TDS = Total Disk Space, with overhead (the 1.75 factor includes overhead for temporary space and so on)
  - Use one of the following two formulas:
    - (CI-RDS) x 1.75
    - (SE-RDS) x 1.75
- CHS = Change History Disk Space
  - This is the amount of space by which the database grows weekly, over and above the initial disk allocation, depending on the frequency of discovery.
  - TDS x 0.1 (a weekly increase of 10% of TDS)
Important: The space requirements increase when additional data is discovered or loaded, or if the customer uses the TADDM versioning feature.
Disk Space Calculation examples:
Example large domain (CI-based):
- CI No = 1,200,000
- CI-RDS = CI No x 4,000 = 4,800,000,000
- TDS = (CI-RDS) x 1.75 = 8,400,000,000
- CHS = TDS x 0.1 = 840,000,000
Example domain (SE-based):
- SE No = 5,500
- SE-RDS = SE No x 800,000 = 4,400,000,000
- TDS = (SE-RDS) x 1.75 = 7,700,000,000
- CHS = TDS x 0.1 = 770,000,000
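The SE-based example can be reproduced with a small shell calculation. This is an illustration only; the variable names are ours, not TADDM's, and the arithmetic simply follows the formulas above.
#!/bin/bash
# Illustrative sizing calculation (SE-based), following the formulas above.
SE=5500                        # number of Server Equivalents
SE_RDS=$(( SE * 800000 ))      # raw disk space for SEs, in bytes
TDS=$(( SE_RDS * 175 / 100 ))  # total disk space including overhead (x 1.75)
CHS=$(( TDS / 10 ))            # estimated weekly growth (10% of TDS)
echo "SE-RDS=${SE_RDS} TDS=${TDS} CHS=${CHS} (bytes)"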
The following is a summary guideline for tuning Windows systems:
- If possible, configure your Windows system to use the /3GB switch in the boot.ini file. This assumes the correct version of the Windows operating system and at least 4 GB of memory. With this configuration, you can allocate more memory resources, such as Java™ heaps, buffer pools, package cache, and so on.
- If possible, locate the system paging file on a separate disk drive. It should not be on the same drive as the operating system.
- On your database and application server, configure the server to maximize data throughput for networking applications.
The network can influence the overall performance of your application; network problems usually manifest as delays in the following situations:
- The time between when a client system sends a request to the server and when the server receives this request
- The time between when the server system sends data back to the client system and the client system receives the data
After a system is implemented, monitor the network to ensure that no more than 50% of its bandwidth is being consumed.
Tuning the database is critical to the efficient operation of any computer system. The default database configurations that are provided with the product are sufficient for proof of concept, proof of technology, and small pilot implementations of TADDM.
If your organization does not have the skills available to monitor and tune your database systems, consider contacting IBM Support or another vendor for resources to perform this important task.
- Do not determine the number of physical disk drives available to your database based on storage capacity alone.
- Ideally, the following components should be placed on separate disk drives or arrays:
- Application data (tables and indexes)
- Database logs
- Database temporary space: used for sort and join operations
- Use the fastest disks available for your log files.
- Enable Asynchronous I/O at the operating system level.
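As a hedged illustration of the log-placement guidance above, in DB2 you can move the active database logs to a dedicated disk with the NEWLOGPATH database configuration parameter. The database name and path below are placeholders; the change takes effect after all connections to the database are released.
db2 UPDATE DB CFG FOR <dbname> USING NEWLOGPATH /dedicated_disk/db2logs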
The following are some guidelines for tuning DB2 databases. Refer to the following documents in the publications section for additional reference material:
- Database Performance Tuning on AIX
- Relational Database Design and Performance Tuning for DB2 Database Servers
- Appendix B in the Administration Guide
Regular maintenance is a critical factor in the performance of any database environment. For DB2 databases, this involves running the REORG and RUNSTATS utilities, in that order, on the database tables.
(Critical) Running the REORG and RUNSTATS utilities is critically important for optimal performance with DB2 databases. After the database is populated, this should be done on a regularly scheduled basis, for example, weekly. A regularly scheduled maintenance plan is essential to maintain peak performance of your system.
REORG: After many changes to table data, caused by insert, delete, and update activity on variable-length columns, logically sequential data might be located on non-sequential physical data pages, so the database manager must perform additional read operations to access it. You can reorganize DB2 tables to eliminate fragmentation and reclaim space by using the REORG command.
You can generate the REORG commands by running the following SQL statement on the DB2 database server:
select 'reorg table '||CAST(RTRIM(creator) AS VARCHAR(40))||'."'||substr(name,1,60)||'" ;' from sysibm.systables where creator = '<DB_USER>' and type = 'T' and name not in ('CHANGE_SEQ_ID') order by 1 ;
where <DB_USER> is the value of the com.collation.db.user property.
This generates all of the REORG TABLE commands that you need to run.
To run this procedure, complete the following steps:
1. Copy the SQL statement above to a file, for example, temp.sql.
2. On the database server, from a DB2 command line, connect to the database and run the following commands:
db2 -x -tf temp.sql > cmdbreorg.sql
db2 -tvf cmdbreorg.sql > cmdbreorg.out
RUNSTATS: The DB2 optimizer uses information and statistics in the DB2 catalog to determine the best access to the database, based on the query that is provided. Statistical information is collected for specific tables and indexes in the local database when you run the RUNSTATS utility. When significant numbers of table rows are added or removed, or if data in columns for which you collect statistics is updated, run the RUNSTATS command again to update the statistics.
You should first ensure that your TADDM database tables are populated before running the RUNSTATS command on the database. This can occur by way of discovery, bulk load, or by using the API. Running the RUNSTATS command on your database tables before there is data in them results in the catalog statistics reflecting 0 rows in the tables. This generally causes the DB2 optimizer to perform table scans when accessing the tables, and to not use the available indexes, resulting in poor performance.
The DB2 product provides functions to automate database maintenance by way of database configuration parameters. Evaluate their use in your environment and determine whether they fit into your database maintenance process; typically, you want more control over when database maintenance activities occur. These are some of the functions:
Automatic maintenance (AUTO_MAINT): This parameter is the parent of all the other automatic maintenance database configuration parameters (auto_db_backup, auto_tbl_maint, auto_runstats, auto_stats_prof, auto_prof_upd, and auto_reorg). When this parameter is disabled, all of its child parameters are also disabled, but their settings, as recorded in the database configuration file, do not change. When this parent parameter is enabled, recorded values for its child parameters take effect. In this way, automatic maintenance can be enabled or disabled globally.
- The default for DB2 V8 is OFF
- The default for DB2 V9 is ON
- (Important) Set this parameter to OFF until you populate your database tables as previously explained.
UPDATE db cfg for <dbname> using AUTO_MAINT OFF
Automatic table maintenance (AUTO_TBL_MAINT): This parameter is the parent of all table maintenance parameters (auto_runstats, auto_stats_prof, auto_prof_upd, and auto_reorg). When this parameter is disabled, all of its child parameters are also disabled, but their settings, as recorded in the database configuration file, do not change. When this parent parameter is enabled, recorded values for its child parameters take effect. In this way, table maintenance can be enabled or disabled globally.
Automatic runstats (AUTO_RUNSTATS): This automated table maintenance parameter enables or disables automatic table runstats operations for a database. A runstats policy (a defined set of rules or guidelines) can be used to specify the automated behavior. To be enabled, this parameter must be set to ON, and its parent parameters must also be enabled.
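If you decide to use these automatic maintenance functions after the TADDM tables are populated, the parent and child parameters can be enabled in a single command. This is a hedged example; <dbname> is a placeholder for your TADDM database name.
db2 UPDATE DB CFG FOR <dbname> USING AUTO_MAINT ON AUTO_TBL_MAINT ON AUTO_RUNSTATS ON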
There is a program in the <TADDM_install_dir>/dist/support/bin directory called gen_db_stats.jy. This program outputs the database commands for either an Oracle or DB2 database to update the statistics on the TADDM tables. The following example shows how the program is used:
cd <TADDM_install_dir>/dist/support/bin
Run the following command:
./gen_db_stats.jy > <dir>/TADDM_table_stats.sql
where <dir> is a directory where this file can be created.
Copy the file to the database server and run the following command:
db2 -tvf <dir>/TADDM_table_stats.sql
- (DB2 databases only) There is an additional performance fix that modifies some of the statistics generated by the RUNSTATS command. There is a program in the <TADDM_install_dir>/dist/bin directory called db2updatestats.sh (for UNIX and Linux systems) or db2updatestats.bat (for Windows systems). This program should be run immediately after the previous procedure, or as part of your standard RUNSTATS procedure. The following example shows how the program is used:
cd <TADDM_install_dir>/dist/bin
Run the following command:
./db2updatestats.sh
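The preceding steps can be combined into a regularly scheduled job. The following is a hedged sketch of a weekly maintenance run on the DB2 database server; it assumes the generated SQL files described above (temp.sql for the REORG query and TADDM_table_stats.sql from gen_db_stats.jy) have already been copied there, and <dbname> is a placeholder.
#!/bin/bash
# Hedged sketch of a weekly DB2 maintenance job for the TADDM database.
db2 connect to <dbname>
db2 -x -tf temp.sql > cmdbreorg.sql            # generate the REORG commands
db2 -tvf cmdbreorg.sql > cmdbreorg.out         # reorganize the TADDM tables
db2 -tvf TADDM_table_stats.sql > runstats.out  # refresh the table statistics
db2 connect reset
# Afterwards, run db2updatestats.sh (or .bat) from <TADDM_install_dir>/dist/bin
# on the TADDM server, as described above.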
A buffer pool is memory used to cache table and index data pages as they are being read from disk, or being modified. The buffer pool improves database system performance by allowing data to be accessed from memory instead of from disk. Because memory access is much faster than disk access, the less often the database manager needs to read from or write to a disk, the better the performance. Because most data manipulation takes place in buffer pools, configuring buffer pools is the single most important tuning area. Only large objects and long field data are not manipulated in a buffer pool.
Modify the buffer pool sizes based on the amount of available system memory that you have and the amount of data that is in your database. The default buffer pool sizes provided with the TADDM database are generally not large enough for production environments. There is no definitive answer to the question of how much memory you should dedicate to the buffer pool. Generally, more memory is better. Because it is a memory resource, its use has to be considered along with all other applications and processes that are running on a server. You can use the DB2 SNAPSHOT monitor to determine buffer pool usage and hit ratios. If an increase to the size of the buffer pools causes system paging, you should lower the size to eliminate paging.
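As a hedged example of checking buffer pool usage with the snapshot monitor (the buffer pool monitor switch, DFT_MON_BUFPOOL, must be enabled, and <dbname> is a placeholder):
db2 connect to <dbname>
db2 GET SNAPSHOT FOR BUFFERPOOLS ON <dbname>
db2 connect reset
The hit ratio for a buffer pool is approximately (1 - (pool physical reads / pool logical reads)) x 100, calculated over the data and index read counters in the snapshot output.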
Buffer pool size guidelines (sizes are in pages of the corresponding page size):

| CIs | 4K buffer pool | 8K buffer pool | 32K buffer pool |
|---|---|---|---|
| < 500K | 50,000 | 5,000 | 1,000 |
| 500K - 1M | 90,000 | 12,000 | 1,500 |
| > 1M (eCMDB) | 150,000 | 24,000 | 2,500 |
For example, you can implement the buffer pool changes as follows (this might require a database restart):
- ALTER BUFFERPOOL IBMDEFAULTBP SIZE 90000
- ALTER BUFFERPOOL BUF8K SIZE 12000
- ALTER BUFFERPOOL BUF32K SIZE 1500
The following list includes important DB2 database configuration parameters that might need to be adjusted, depending on data volumes, usage, and deployment configuration:
- DBHEAP
- NUM_IOCLEANERS
- NUM_IOSERVERS
- LOCKLIST
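These database configuration parameters can be changed with the UPDATE DB CFG command. The following is a hedged example with purely illustrative values; <dbname> is a placeholder, and appropriate settings depend on your data volumes and available memory.
db2 UPDATE DB CFG FOR <dbname> USING DBHEAP 10000 NUM_IOCLEANERS 8 NUM_IOSERVERS 12 LOCKLIST 8192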
The following list includes important DB2 database manager parameters that might need to be adjusted, depending on data volumes, usage, and deployment configuration:
- ASLHEAPSZ
- INTRA_PARALLEL
- QUERY_HEAP_SZ
- RQRIOBLK
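These database manager parameters can be changed with the UPDATE DBM CFG command. The following is a hedged example with purely illustrative values; INTRA_PARALLEL (YES or NO) can be set the same way, but evaluate it separately based on your workload.
db2 UPDATE DBM CFG USING ASLHEAPSZ 64 QUERY_HEAP_SZ 4096 RQRIOBLK 65535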
Set the following DB2 registry variables:
- DB2_PARALLEL_IO: Enables parallel I/O operations. This is only applicable if your table space containers and hardware are configured appropriately.
- DB2NTNOCACHE=ON (Windows only): Turn this on to disable file system caching by the Windows operating system.
- DB2_USE_ALTERNATE_PAGE_CLEANING
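Registry variables are set with the db2set command and take effect after the instance is restarted. The following is a hedged example; confirm the appropriate values for your environment before applying them.
db2set DB2_PARALLEL_IO=*
db2set DB2NTNOCACHE=ON
db2set DB2_USE_ALTERNATE_PAGE_CLEANING=ON
db2stop
db2start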
Database logs
- Tune the Log File Size (logfilsiz) database configuration parameter so that you are not creating excessive log files.
- Use Log Retain logging to ensure recoverability of your database.
- Mirror your log files to ensure availability of your database system.
- Modify the size of the database configuration Log Buffer parameter (logbufsz) based on the volume of activity. This parameter specifies the amount of the database heap to use as a buffer for log records before writing these records to disk. Buffering the log records results in more efficient logging file I/O because the log records are written to disk less frequently, and more log records are written at a time.
Modify the PREFETCHSIZE on the table spaces based on the following formula. An ideal size is a multiple of the extent size, the number of physical disks under each container (if a RAID device is used) and the number of table space containers. The extent size should be fairly small, with a good value being in the range of 8 - 32 pages. For example, for a table space on a RAID device with 5 physical disks, 1 container (suggested for RAID devices) and an EXTENTSIZE of 32, the PREFETCHSIZE should be set to 160 (32 x 5 x 1).
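The log-related parameters are also set with UPDATE DB CFG, and PREFETCHSIZE is set with ALTER TABLESPACE. The following is a hedged example with placeholder values, a placeholder database name, and a placeholder table space name; substitute the values you calculate for your environment.
db2 connect to <dbname>
db2 UPDATE DB CFG FOR <dbname> USING LOGFILSIZ 10240 LOGBUFSZ 512 MIRRORLOGPATH /mirror_disk/db2logs
db2 "ALTER TABLESPACE <tablespace_name> PREFETCHSIZE 160"
db2 connect reset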
The following guidelines are for tuning Oracle databases. Refer to "Database Performance Tuning on AIX" in the publications section for additional reference material on tuning Oracle databases.
Regular maintenance is a critical factor in the performance of any database environment. For Oracle databases, this involves running the dbms_stats package on the database tables. Oracle uses a cost-based optimizer, which needs statistics about the tables and indexes to decide on an access plan; those statistics are generated by the dbms_stats package. Without this data, the optimizer has to estimate. For Oracle, statistics collection can also be accomplished by enabling automatic optimizer statistics collection.
(Critical) Rebuilding the indexes and running the dbms_stats package are critically important for optimal performance with Oracle databases. After the database is populated, this should be done on a regularly scheduled basis, for example, weekly. A regularly scheduled maintenance plan is essential to maintain peak performance of your system.
REBUILD INDEX: After many changes to table data, caused by insertion, deletion, and updating activity, logically sequential data might be on non-sequential physical data pages, so that the database manager must perform additional read operations to access data. You can rebuild the indexes to help improve SQL performance.
You can generate the REBUILD INDEX commands by running the following SQL statement on the Oracle database:
select 'alter index <DB_USER>.'||index_name||' rebuild tablespace '||tablespace_name||';' from dba_indexes where owner = '<DB_USER>';
where <DB_USER> is the value of the com.collation.db.user property.
This generates all of the ALTER INDEX commands that you need to run.
Run the commands in SQLPLUS or some comparable facility. Rebuilding all the indexes on a large database takes 15 - 20 minutes.
Automatic Optimizer Statistics Collection: It is recommended that you take advantage of automatic optimizer statistics collection for Oracle.
DBMS_STATS: (Note: If Automatic Optimizer Statistics Collection is enabled, manual statistics collection is not necessary.) You use the Oracle RDBMS to collect many different kinds of statistics as an aid to improving performance. The optimizer uses information and statistics in the dictionary to determine the best access to the database based on the query provided. Statistical information is collected for specific tables and indexes in the local database when you run the DBMS_STATS package. When significant numbers of table rows are added or removed, or if data in columns for which you collect statistics is updated, run DBMS_STATS again to update the statistics.
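If you collect statistics manually, the following is a hedged example of gathering statistics for the whole TADDM schema from the database server; <DB_USER> and <password> are placeholders for the TADDM schema owner and its password.
sqlplus <DB_USER>/<password> <<'EOF'
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => '<DB_USER>', cascade => TRUE);
EOF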
There is a program in the <TADDM_install_dir>/dist/support/bin directory called gen_db_stats.jy. This program outputs the database commands for either an Oracle or a DB2 database to update the statistics on the TADDM tables. The following example shows how the program is used:
cd <TADDM_install_dir>/dist/support/bin
./gen_db_stats.jy > <dir>/TADDM_table_stats.sql
where <dir> is a directory where this file can be created.
After this is complete, copy the file to the database server and run the commands it contains in SQLPLUS or some comparable facility.
Automatic Memory Management: Oracle 11g introduces Automatic Memory Management. It is recommended that you use this feature of Oracle and allow the database to automatically adjust its own memory structures, such as the buffer pools. When using automatic memory management, ensure that you configure the SGA_TARGET and SGA_MAX_SIZE parameters properly so that the database has adequate physical memory available. These values can be queried from TADDM using the following command:
dist/bin/dbquery.sh "SELECT name, value FROM v\$parameter WHERE UPPER(name) = 'SGA_TARGET' OR UPPER(NAME) = 'SGA_MAX_SIZE'"
Note that the view name is actually v$parameter but the backslash/escape is necessary during Linux shell execution.
Oracle also has a table with advice on how to set these values. The following command will give the advice:
dist/bin/dbquery.sh "SELECT sga_size, sga_size_factor, estd_db_time_factor FROM v\$sga_target_advice ORDER BY sga_size ASC"
Here is sample output from this command:
"SGA_SIZE" "SGA_SIZE_FACTOR" "ESTD_DB_TIME_FACTOR" "2304" "0.375" "98.1152" "3072" "0.5" "13.8134" "3840" "0.625" "6.6657" "4608" "0.75" "2.1537" "5376" "0.875" "1.1724" "6144" "1" "1" "6912" "1.125" "0.9702" "7680" "1.25" "0.9549" "8448" "1.375" "0.945" "9216" "1.5" "0.9433" "9984" "1.625" "0.9431" "10752" "1.75" "0.9429" "11520" "1.875" "0.9428" "12288" "2" "0.9428"
The SGA_SIZE_FACTOR of "1" is the current setting, and ESTD_DB_TIME_FACTOR indicates the performance change you can expect by adjusting the SGA values. In the example above, if the SGA_SIZE were reduced from 6 GB to 2 GB, the database would theoretically run 98 times slower. Conversely, increasing the SGA from 6 GB to 12 GB should not improve performance by much at all: a theoretical 6%. A good starting value for the SGA is 8 GB for a large environment, but this value needs to be monitored and increased when needed.
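If the advice indicates that a larger SGA would help, the following is a hedged example of raising SGA_TARGET to 8 GB in the SPFILE; the size is a placeholder, SGA_MAX_SIZE must be at least as large, and the instance must be restarted for the SPFILE change to take effect.
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET sga_target = 8G SCOPE = SPFILE;
EOF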
Buffer pools: (Note: If Automatic Memory Management is being used then manual buffer pool configuration is not necessary) A buffer pool or buffer cache is a memory structure inside Oracle System Global Area (SGA) for each instance. This buffer cache is used for caching data blocks in the memory. Accessing data from the memory is significantly faster than accessing data from disk. The goal of block buffer tuning is to efficiently cache frequently used data blocks in the buffer cache (SGA) and provide faster access to data. Tuning block buffer is a key task in any Oracle tuning initiative and is a part of the ongoing tuning and monitoring of production databases. The Oracle product maintains its own buffer cache inside the SGA for each instance. A properly sized buffer cache can usually yield a cache hit ratio over 90%, which means that nine requests out of ten are satisfied without going to disk. If a buffer cache is too small, the cache hit ratio will be small and more physical disk I/O results. If a buffer cache is too big, then parts of the buffer cache will be underutilized and memory resources are wasted.
- Buffer pool size guidelines (db_cache_size):
  - < 500K CIs: 38,000
  - 500K - 1M CIs: 60,000
  - > 1M CIs (eCMDB): 95,000
Refer to the publications section for additional reference material, and in particular to the "Tuning Discovery Performance" document available on the Tivoli OPAL Web site.
Most of the TADDM modifiable parameters are contained in the collation.properties file. As its name implies, this is a Java properties file with a list of name-value pairs separated by an equals sign (=).
<TADDM_install_dir>/dist/etc/collation.properties
The attribute discovery rate is the area with the most potential for tuning. In this file, the property with the most impact on performance is the number of discovery worker threads:
# Max number of discover worker threads
com.collation.discover.dwcount=16
Provided the server has sufficient spare capacity, this setting can be increased. This allows more sensors to run in parallel.
By observing a discovery run and comparing the number of in-progress sensors in the started stage with the number of in-progress sensors in the discovered or storing stages, you can assess whether attribute discovery is faster or slower than attribute storage for a particular environment. As with all changes to the collation.properties file, the server must be restarted for the change to take effect.
The second major area for tuning is storage. Storage of the discovery results is the discovery performance bottleneck if the number of sensors in the storing state stays close to the value of the following property:
com.collation.discover.observer.topopumpcount
This property is the number of parallel storage threads. It is one of the main settings for controlling discovery storage performance. It must, however, be adjusted carefully.
There are three distinct phases for loading data by way of the Bulk Loader:
1. Analyze the objects and relationships to determine the graphs in the data (typically 1 - 5% of execution time).
2. Construct model objects and build graphs (typically 2 - 5% of execution time).
3. Pass the data to the API server (typically 90 - 99% of execution time).
There are two options for loading data:
- Data can be loaded one record at a time. This is the default. Files with errors and files with extended attributes must use default loading.
- Data can be loaded in bulk (called graph writing). Bulk loading with the graph-write option is significantly faster than running in the default mode (see the Bulk Load measurements for details). An example of running with the graph-write option is as follows:
./loadidml.sh -g -f /home/confignia/testfiles/sample.xml
where -g means buffer and pass blocks of data to the API server.
The following parameter in the bulkload.properties file can be used to improve graph-writing performance:
com.ibm.cdb.bulk.cachesize=800 (this is the default)
This parameter controls the number of objects to be processed in a single write operation when performing a graph write.
- Increasing this number improves performance at the risk of running out of memory either on the client or at the server. The number should only be altered when specific information is available to indicate that processing a file with a larger cache provides some benefit in performance.
- The cache size setting currently can be no larger than 2000.
TADDM consists of a number of Java based servers that perform various functions in the product. The following JVMs are run in support of TADDM:
- Topology
- EventsCore
- DiscoverAdmin
- Proxy
- Discover
- Gigaspaces
- Tomcat
When using the IBM implementation of the JVM, the application should not make any explicit garbage collection calls, for example, System.gc(). Explicit garbage collection should be disabled for each JVM by using the following option:
-Xdisableexplicitgc
Fragmentation of the Java heap can occur as the number of objects that are processed increases. There are a number of parameters that you can set to help reduce fragmentation in the heap.
A kCluster is an area of storage that is used exclusively for class blocks. By default, it is large enough to hold 1280 entries, and each class block is 256 bytes long. This default value is usually too small and can lead to fragmentation of the heap. Set the kCluster parameter, -Xk, as follows to help reduce fragmentation of the heap. These are starting values and might have to be tuned in your environment. An analysis of a heap dump is the best way to determine the ideal size.
- Topology: -Xk8300
- EventsCore: -Xk3500
- DiscoverAdmin: -Xk3200
- Proxy: -Xk5700
- Discover: -Xk3700
- Gigaspaces: -Xk3000
You can implement these changes in the collation.properties file by adding entries in the JVM Vendor Specific Settings section. For example, to implement these changes for the Topology server, add the following line:
#===================================
# JVM Vendor Specific Settings
#===================================
com.collation.Topology.jvmargs.ibm=-Xdisableexplicitgc -Xk8300
Another option for fragmentation issues is to allocate some space specifically for large objects (greater than 64 KB). You can do this with the -Xloratio parameter. For example:
-Xloratio0.2
This option reserves x% of the active Java heap (not x% of -Xmx, but x% of the current size of the Java heap) for the allocation of large objects (64 KB or greater) only. If you change it, also adjust -Xmx to make sure that you do not reduce the size of the small-object area. Again, an analysis of a heap dump is the best way to determine the ideal setting for this parameter.
There are a few additional parameters that can be set that affect Java performance. To change an existing JVM option to a different value, edit one of the following files:
- Edit the <TADDM_install_dir>/dist/deploy-tomcat/ROOT/WEB-INF/cmdb-context.xml file
- If eCMDB is in use, edit the <TADDM_install_dir>/dist/deploy-tomcat/ROOT/WEB-INF/ecmdb-context.xml file instead.
To edit one of these files to change the settings for one of the TADDM services, first find the service in the file. Here is an example of the beginning of a service definition in the XML file:
<bean id="Discover" class="com.collation.platform.jini.ServiceLifecycle" initmethod="start" destroy-method="stop">
<property name="serviceName">
<value>Discover</value>
</property>
Within the definition there are some elements and attributes that control the JVM arguments. For example:
<property name="jvmArgs">
<value>-Xms8M;-Xmx512M;-Djava.nio.channels.spi.SelectorProvider=sun.nio.ch.PollSelectorProvider</value>
</property>
The JVM arguments can be set as a semicolon-separated list in the <property name="jvmArgs"><value> element.
When using the Sun JVM, the following suggested changes should be made.
You can implement these changes in the collation.properties file by adding entries in the JVM Vendor Specific Settings section. For example, to implement these changes for the Topology server, add the following line:
#===================================
# JVM Vendor Specific Settings
#===================================
com.collation.Topology.jvmargs.sun=-XX:MaxPermSize=128M -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError
The default settings in the collation.properties file for the GUI JVM Settings are as follows:
#===================================
# GUI JVM Settings
#===================================
com.collation.gui.initial.heap.size=128m
com.collation.gui.max.heap.size=512m
These settings are appropriate for a small TADDM Domain. For the purposes of sizing, the following categories of TADDM servers are used, based on Server Equivalents:
- Small: up to 1000 SEs
- Medium: 1000 - 2500 SEs
- Large: 2500 - 5000 SEs
Increasing these values for medium and large environments gives you improved performance for some GUI operations. The following guidelines are provided.
Medium Environment:
com.collation.gui.initial.heap.size=256m
com.collation.gui.max.heap.size=768m
Large Environment:
com.collation.gui.initial.heap.size=512m
com.collation.gui.max.heap.size=1024m
The following books serve as reference material.

| Information Units | Contents |
|---|---|
| Administration Guide: Performance (SC09-4821) | This book contains information about how to configure and tune your database environment to improve performance. This and all other DB2 publications are available at: http://www-306.ibm.com/software/data/db2/udb/support/manualsv8.html |
| Database Performance Tuning on AIX (SG24-5511-01) | This IBM Redbook contains hints and tips from experts who work on RDBMS performance every day. It also provides introductions to general database layout concepts from a performance point of view, design and sizing guidelines, tuning recommendations, and performance and tuning information for DB2 UDB, Oracle, and IBM Informix databases. http://www.redbooks.ibm.com/redbooks/pdfs/sg245511.pdf |