@Javax.persistence.Entity, @Transactional and DynamicTable (DeadLock and LockwaitTimeout) - Yash-777/MyWorld GitHub Wiki

Log4jdbc to Log SQL Queries stackoverflow post

  1. com.integralblue (candrews) log4jdbc-spring-boot-starter
<dependency>
  <groupId>com.integralblue</groupId>
  <artifactId>log4jdbc-spring-boot-starter</artifactId>
  <version>2.0.0</version>
</dependency>
hibernate.format_sql=true
hibernate.show_sql=true
logging.level.org.hibernate.SQL=DEBUG
logging.level.org.hibernate.type=TRACE

Loading class `com.mysql.jdbc.Driver`. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver`. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.

You can set the correct driver with the following properties:

log4jdbc.drivers=com.mysql.cj.jdbc.Driver  
log4jdbc.auto.load.popular.drivers=false
  2. org.bgee.log4jdbc-log4j2 (and its fork net.disy.oss)
net.disy.oss:log4jdbc is [forked from org.bgee.log4jdbc-log4j2](https://github.com/DisyInformationssysteme/log4jdbc).

<dependency>
  <groupId>org.bgee.log4jdbc-log4j2</groupId>
  <artifactId>log4jdbc-log4j2-jdbc4.1</artifactId>
  <version>[INSERT VERSION HERE]</version>
</dependency>

<dependency>
  <groupId>net.disy.oss</groupId>
  <artifactId>log4jdbc</artifactId>
  <version>3.0.0</version>
</dependency>
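
For these artifacts (unlike the Spring Boot starter, which configures itself), log4jdbc is typically wired in by prefixing the JDBC URL and switching to the spy driver. A minimal sketch, assuming Spring Boot datasource properties and the driver class documented by log4jdbc-log4j2 (verify the class name against the exact artifact and version you use):

# Wrap the real URL with the log4jdbc prefix so the spy driver intercepts all JDBC calls
spring.datasource.url=jdbc:log4jdbc:mysql://localhost:3306/mydb
# log4jdbc-log4j2 spy driver, which delegates to the real MySQL driver
spring.datasource.driver-class-name=net.sf.log4jdbc.sf.log4jdbc4.DriverSpy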

A transaction is a set of logically related operations.

Transactions are ubiquitous in today's enterprise and business systems: a transaction is a collection of read/write operations that succeeds only if all contained operations succeed.

DBMS Transaction States

A transaction passes through several states during its lifetime. These states describe where the transaction currently is in its processing, and they govern the rules that decide its fate: whether it will commit or abort.

The DBMS also uses a transaction log, a file maintained by the recovery-management component to record all the activities of the transaction. Once the transaction commits, its records in the transaction log can be discarded.

(Figures: transaction operations; transaction state diagram)

In a relational database, every SQL statement must execute in the scope of a transaction. Without defining the transaction boundaries explicitly, the database uses an implicit transaction that wraps around every individual statement. The implicit transaction begins before the statement is executed and ends (with a commit or rollback) after the statement is executed.
The implicit transaction mode is commonly known as autocommit.

For an enterprise application, the auto-commit mode is something you’d generally want to avoid since it has serious performance penalties, and it doesn’t allow you to include multiple DML operations in a single atomic Unit of Work.
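To make this concrete, here is a minimal plain-JDBC sketch (the account table, column names, and ids are invented for illustration) that turns autocommit off so two DML statements commit or roll back as one atomic unit of work:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferExample {
    public static void transfer(String url, String user, String pass) throws SQLException {
        try (Connection con = DriverManager.getConnection(url, user, pass)) {
            con.setAutoCommit(false); // leave autocommit mode: we define the transaction boundaries ourselves
            try {
                try (PreparedStatement debit = con.prepareStatement(
                        "UPDATE account SET balance = balance - 100 WHERE id = 1")) {
                    debit.executeUpdate();
                }
                try (PreparedStatement credit = con.prepareStatement(
                        "UPDATE account SET balance = balance + 100 WHERE id = 2")) {
                    credit.executeUpdate();
                }
                con.commit();   // both updates become visible atomically
            } catch (SQLException e) {
                con.rollback(); // neither update takes effect
                throw e;
            }
        }
    }
}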

Database transactions are defined by the four properties known as ACID.

| Property | Responsible DBMS component | Description |
| --- | --- | --- |
| Atomicity | Transaction Manager | Either the entire transaction takes place at once or it does not happen at all; there is no midway, i.e. transactions do not occur partially. Each transaction is treated as one unit and either runs to completion or is not executed at all. Two operations are involved: Abort (changes made to the database are not visible) and Commit (changes made are visible). Atomicity is also known as the "all or nothing" rule. |
| Consistency | Application programmer | Integrity constraints must be maintained so that the database is consistent before and after the transaction; consistency refers to the correctness of the database. |
| Isolation | Concurrency Control Manager | Multiple transactions can occur concurrently without leading to an inconsistent database state. Transactions run independently, without interference; changes made in one transaction are not visible to any other transaction until they have been written or committed. |
| Durability | Recovery Manager | Once a transaction has completed execution, its updates and modifications are written to disk and persist even if a system failure occurs. These updates become permanent, stored in non-volatile memory; the effects of the transaction are never lost. |

If the system is suddenly affected by a crash or a power outage, committed but not yet fully persisted transactions can be replayed from the transaction log.

ACID Properties in DBMS (geeksforgeeks.org)

Consider the following transaction T consisting of T1 and T2: Transfer of 100 from account X to account Y.

(Figure: transaction T, consisting of T1 and T2)

Example:

Atomicity

If the transaction fails after completion of T1 but before completion of T2 (say, after write(X) but before write(Y)), then the amount has been deducted from X but not added to Y. This results in an inconsistent database state. Therefore, the transaction must be executed in its entirety in order to ensure the correctness of the database state.

Consistency:

The total amount before and after the transaction must be maintained.
Total before T occurs = 500 + 200 = 700.
Total after T occurs = 400 + 300 = 700.
Therefore, the database is consistent. Inconsistency occurs in case T1 completes but T2 fails. As a result, T is incomplete.

Isolation:

Let X = 500 and Y = 500, and consider two transactions T and T''.

(Figure: interleaved operations of T and T'')

Suppose T has executed up to Read(Y): it has already updated X (say X = X * 100 = 50,000) but has not yet deducted 50 from Y. If T'' starts now, the interleaving of operations means T'' reads the new value of X but the old value of Y, so the sum it computes,
T'': X + Y = 50,000 + 500 = 50,500,
is not consistent with the sum at the end of transaction T:
T: X + Y = 50,000 + 450 = 50,450.
This results in database inconsistency, due to a loss of 50 units. Hence, transactions must take place in isolation, and changes should be visible only after they have been written to main memory or committed.


All managed entity state transitions are translated to associated database statements when the current Persistence Context gets flushed. Hibernate’s flush behavior is not always as obvious as one might think.

Write-behind

Hibernate tries to defer the Persistence Context flushing up until the last possible moment. This strategy has been traditionally known as transactional write-behind.

The write-behind is more related to Hibernate flushing rather than any logical or physical transaction. During a transaction, the flush may occur multiple times.

The flushed changes are visible only to the current database transaction. Until the current transaction is committed, no change is visible to other concurrent transactions.

The persistence context, also known as the first level cache, acts as a buffer between the current entity state transitions and the database.

In caching theory, the write-behind synchronization requires that all changes happen against the cache, whose responsibility is to eventually synchronize with the backing store.
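
A small sketch of write-behind in action, assuming an EntityManagerFactory is available and using the UserTable entity shown later on this page: the INSERT is not issued at persist() time but when the persistence context is flushed, at the latest at commit:

EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();

UserTable user = new UserTable();
em.persist(user); // queued in the persistence context only; no INSERT issued yet

// With FlushModeType.AUTO, a JPQL query triggers a flush first so it can see the pending INSERT
em.createQuery("select u from UserTable u", UserTable.class).getResultList();

em.getTransaction().commit(); // remaining changes are flushed, then the transaction commits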


Hibernate Caching

Caching is an application performance optimization: it sits between your application and the database to avoid as many database hits as possible, giving better performance for performance-critical applications.

One of the primary concerns of mapping between a database and a Java application is performance. This is a common concern for anyone working with Hibernate, and much time with ORM tools is spent on performance-enhancing changes to particular queries and retrievals.

Hibernate cache (image, Stack Overflow)
  • First Level Cache (L1 cache): Hibernate first level cache is associated with the Session object.
    - This cache only works at a session level, meaning each session object caches data independently, so there is no sharing of cached data across sessions, and the cached data is deleted when the session closes.
  • Second Level Cache (L2 cache): Hibernate Second Level cache is disabled by default but we can enable it through configuration. Ehcache as a plug-in second-level cache for Hibernate – Automatically cache common queries in memory to substantially lower latency.
    - A second-level cache improves application performance with regard to persistence for all sessions created with the same session factory. With a second-level cache, a request for an object can be served by the cache, even if the request is executed from multiple sessions, with much lower latency than if the request went to the database.
  • Query Cache: Hibernate can also cache the result set of a query.
    - The query cache stores only the identifiers of the returned entities, so it is normally used together with the second-level cache, which holds the entities themselves; it pays off for queries that are run repeatedly with the same parameter values.
(Figure: Hibernate cache architecture: first-level cache, second-level cache, and query cache)
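
Enabling the second-level cache for an entity is usually a matter of marking it cacheable; a minimal sketch (the CountryCode entity is hypothetical; the region factory configuration is shown later in DataSourceConfig2):

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cacheable                                          // JPA hint: this entity may be cached
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE) // Hibernate: L2 strategy for this cache region
public class CountryCode {
    @Id
    private Long id;
    // fields, getters, setters ...
}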




Java Persistence API comes with a thorough concurrency control mechanism, supporting both implicit and explicit locking. The implicit locking mechanism is straightforward and it relies on:

  • Optimistic locking: Entity state changes can trigger a version incrementation
  • Row-level locking: Based on the current running transaction isolation level, the INSERT/UPDATE/DELETE statements may acquire exclusive row locks

@javax.persistence.Version (OptimisticLockException)

The following types are supported for version properties: int, Integer, short, Short, long, Long, java.sql.Timestamp.

@Entity
public class MyEntity implements Serializable {
    // ...

    @Version
    @Column(name = "optlockversion", columnDefinition = "integer DEFAULT 0", nullable = false)
    private long version = 0L;

    // ...
}
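
When two transactions load the same row and both try to update it, Hibernate appends the version to the UPDATE's WHERE clause; the second committer matches zero rows and fails. A minimal sketch of handling this in a Spring application (the repository and entity variable are hypothetical):

try {
    myEntityRepository.save(entity); // UPDATE ... SET version = version + 1 WHERE id = ? AND version = ?
} catch (org.springframework.orm.ObjectOptimisticLockingFailureException e) {
    // Spring's wrapper around javax.persistence.OptimisticLockException:
    // another transaction updated the row first; reload the entity and retry, or report a conflict
}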

@Transactional Annotation in Spring: @EnableTransactionManagement

When you annotate a method with @Transactional in a Spring application, it means that the method will be executed within a transactional context. Transactions in the context of a database typically involve operations like opening a transaction, committing changes to the database on success, and rolling back changes on failure.

In summary, the @Transactional annotation in Spring helps manage database transactions. It ensures that changes are either committed or rolled back consistently, and it provides mechanisms to handle concurrency issues, such as deadlocks, in multi-transaction scenarios. The first-level cache plays a crucial role in maintaining the persistent state of entities during these transactions.

Let’s consider an example with two tables: UserTable (a standard entity) and UserDynamic (created dynamically). Assume both tables are involved in multiple transactions.

@Entity // ORM - first-level cache
public class UserTable {
    // Fields, getters, setters, etc.
}

// Assume UserDynamic is created dynamically (not a standard entity) UserDynamic_YYYYMM, UserDynamic_202301, UserDynamic_202302 ...
public class UserDynamic {  // Non-ORM so no cache
    // Fields, getters, setters, etc.
}

In a typical JPA (Java Persistence API) and Hibernate setup, the first-level cache is managed by the EntityManager, and it is used to track the state of entities that are annotated with @Entity. The first-level cache is not directly tied to the use of native SQL queries or the JDBC template.

When you perform DB operations using native SQL and the JDBC template, you are bypassing the Hibernate EntityManager, and Hibernate is not directly aware of the changes made through these operations. The first-level cache is usually associated with the EntityManager's management of entities and the JPA repository methods.

In the context of Hibernate, which is commonly used with Spring for ORM (Object-Relational Mapping), entities marked with @Entity are managed by the Hibernate Session and can participate in the first-level cache.

However, in the case of UserDynamic, which is not annotated with @Entity, it won't be automatically managed by Hibernate.

@Entity UserTable: if you use native SQL queries or the JdbcTemplate to interact directly with the database, you won't automatically benefit from the first-level cache. In this scenario you are responsible for managing the state of the entities yourself, and you don't get the automatic caching and consistency features provided by the EntityManager's first-level cache.
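
A sketch of the pitfall (the table and column names are invented); a JdbcTemplate UPDATE does not touch the persistence context, so an already-loaded entity keeps its stale state until you refresh or clear:

UserTable user = em.find(UserTable.class, 1L); // loaded and cached in the first-level cache

// Bypasses Hibernate entirely; the persistence context knows nothing about this change
jdbcTemplate.update("UPDATE user_table SET name = ? WHERE id = ?", "changed", 1L);

UserTable same = em.find(UserTable.class, 1L); // served from the L1 cache: still the stale name

em.refresh(user); // re-reads the row from the database, synchronizing the cached entity
// or: em.clear(); // detach everything so the next find() hits the database again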

Transaction Lifecycle

Open Transaction:

  • A transaction is started when the annotated method begins execution.

Successful Operation:

  • If all database operations within the transaction are successful, changes are committed to the database.
  • The changes made to entities (like @Entity classes, e.g., UserTable) are stored in the first-level cache.
  • The first-level cache is flushed to the database on successful commit.

Failed Operation:

  • If any operation within the transaction fails, a rollback is triggered.
  • The first-level cache, which holds the persistent state of entities, is reverted to its previous state before the transaction started.
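
Put together, a typical service method looks like the sketch below (the repository, setter, and exception are illustrative): the commit flushes the first-level cache, and any unchecked exception triggers the rollback path described above.

@Service
public class UserService {

    @Autowired UserTableRepository userRepository; // hypothetical Spring Data repository

    @Transactional // opens the transaction; commits on success, rolls back on a RuntimeException
    public void renameUser(long id, String newName) {
        UserTable user = userRepository.findById(id)
                .orElseThrow(() -> new IllegalArgumentException("no user " + id));
        user.setName(newName); // dirty-checked in the L1 cache; the UPDATE is issued at flush/commit
    }
}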

DeadLock and LockwaitTimeout

-- This will normally show you the full SQL of the query that is holding the lock.
show full processlist;

-- Check your database transaction isolation level in MySQL: ( REPEATABLE READ is the InnoDB default )
SELECT @@GLOBAL.transaction_isolation, @@session.transaction_isolation, @@transaction_isolation;

SHOW VARIABLES LIKE 'innodb_buffer_pool_size'; -- 134217728

SHOW VARIABLES LIKE 'innodb_lock_wait_timeout'; --  By default, it’s set to 50 seconds.
SELECT @@GLOBAL.innodb_lock_wait_timeout, @@session.innodb_lock_wait_timeout, @@innodb_lock_wait_timeout;
-- You can modify the innodb_lock_wait_timeout value globally or per session.
SET GLOBAL  innodb_lock_wait_timeout = 50; -- value in seconds; raise it (e.g. 300 = 5 minutes) for long-running workloads
SET SESSION innodb_lock_wait_timeout = 50;
SET         innodb_lock_wait_timeout = 50;

In Java, the @Transactional annotation is typically used to mark a method, class, or interface as transactional, indicating that the method (or methods within the class or interface) should be executed within a transaction context.

Important

Deadlock found when trying to get lock; try restarting transaction karbachinsky.medium.com

Handling Deadlocks in Multi-Transaction Scenarios

Deadlocks can occur when multiple transactions compete for the same resources (e.g., database rows) and end up blocking each other. To mitigate deadlocks, consider the following approaches:

  1. Optimistic Locking:

    • Use optimistic locking mechanisms (e.g., version-based locking or timestamp) to prevent concurrent updates from causing deadlocks.
    • In Spring Data JPA, you can achieve this by adding a @Version field to your entity class. This field is automatically incremented during updates.
    • When two transactions try to update the same entity concurrently, the one that commits second finds a stale version and fails, avoiding a lock-based deadlock.
  2. Transaction Isolation Levels:

    • Configure appropriate transaction isolation levels to control how transactions interact with each other.
    • Adjust Isolation Levels: Change the isolation level of transactions to reduce contention. Common isolation levels include READ_COMMITTED, REPEATABLE_READ, and SERIALIZABLE.
    • Choose the isolation level based on your application’s requirements and the likelihood of concurrent access.
  3. Avoid Long Transactions:

    • Keep transactions as short as possible to minimize the chances of deadlocks.
    • Long-running transactions increase the risk of conflicts with other transactions. Transaction Timeout: Set a timeout for transactions to prevent them from holding locks indefinitely.
  4. Retry Mechanisms:

    • Implement retry logic for failed transactions.
    • If a transaction encounters a deadlock, retry it after a short delay.
    • Be cautious not to create an infinite loop of retries.
RetryJdbcTemplate: use retries to reattempt acquiring the lock on a DB record (registered in DataSourceConfig, see below)
import org.springframework.dao.DeadlockLoserDataAccessException;
import org.springframework.jdbc.core.JdbcTemplate;

/*
 * In the DataSourceConfig file, add a bean so the RetryJdbcTemplate is available in the Spring context.
 * It can then be injected alongside the plain JdbcTemplate:
@Service
public class YourService {
	@Autowired JdbcTemplate      jdbcTemplate;
	@Autowired RetryJdbcTemplate retryJdbcTemplate;
}
 */
public class RetryJdbcTemplate extends JdbcTemplate {
    private static final int MAX_RETRIES = 3;
    private static final long WAIT_TIME_MS = 100; // Adjust as needed
    
    public RetryJdbcTemplate() {
        super();
    }

    public RetryJdbcTemplate(javax.sql.DataSource dataSource) {
        super(dataSource);
    }

    @Override
    public int update(String sql) {
        return doWithRetry(() -> super.update(sql));
    }

    @Override
    public int update(String sql, Object... args) {
        return doWithRetry(() -> super.update(sql, args));
    }

    private int doWithRetry(Operation operation) {
        int retries = 0;
        while (true) {
            try {
                return operation.execute();
            } catch (DeadlockLoserDataAccessException e) {
            	System.err.println("Retry :"+ e.getLocalizedMessage());
            	//e.printStackTrace();
                if (retries < MAX_RETRIES) {
                    retries++;
                    try {
                        Thread.sleep(WAIT_TIME_MS);
                    } catch (InterruptedException ignored) {
                        Thread.currentThread().interrupt();
                    }
                } else {
                    throw e; // If retries exceeded, rethrow the exception
                }
            }
        }
    }
    private interface Operation {
        int execute();
    }
}
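
To make the @Autowired RetryJdbcTemplate above resolvable, the DataSourceConfig below would also need to expose it as a bean; a minimal sketch reusing the same HikariCP DataSource:

// In DataSourceConfig (sketch): expose the RetryJdbcTemplate next to the plain JdbcTemplate
@Bean
public RetryJdbcTemplate retryJdbcTemplate() {
    return new RetryJdbcTemplate( getDataSourceHikariConfig() );
}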

DataBase - DataSourceConfig.getDataSourceHikariConfig()
import java.util.HashMap;

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(
  entityManagerFactoryRef = "rdbmsEntityManager",
  transactionManagerRef = "transactionManager",
  basePackages = { "com.github.myworld.repositories" }
)
@ConditionalOnProperty(value = "datasourceType", havingValue = "RDBMS", matchIfMissing = false)
public class DataSourceConfig2 {
    
    static String packagesToScan = "com.github.myworld.entities";
    static boolean useL2Cache = false, useQueryCache = false;
    
    @Autowired
    Environment env;

//    @Bean
//    @ConfigurationProperties(prefix = "app.datasource.myworld")
//    public DataSourceProperties getDataSourceProperties() {
//        //spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL8Dialect
//        return new DataSourceProperties();
//    }
    
    /*
app.datasource.myworld.url=jdbc:mysql://localhost:3306/flywaydb?createDatabaseIfNotExist=true
app.datasource.myworld.username=root
app.datasource.myworld.password=root
app.datasource.myworld.driver-class-name=com.mysql.cj.jdbc.Driver
     */
    @Bean(name = "hikariDataSource")
    public DataSource getDataSourceHikariConfig() {
        String propertiesPrefix = "app.datasource.myworld.";
        String jdbcURL         = env.getProperty(propertiesPrefix+"url",               String.class);
        String driverClassName = env.getProperty(propertiesPrefix+"driver-class-name", String.class);
        String username = env.getProperty(propertiesPrefix+"username",                 String.class);
        String password = env.getProperty(propertiesPrefix+"password",                 String.class);
        
        com.zaxxer.hikari.HikariConfig hikariConfig = new com.zaxxer.hikari.HikariConfig();
        hikariConfig.setJdbcUrl( jdbcURL );
        hikariConfig.setDriverClassName( driverClassName );
        hikariConfig.setUsername( username );
        hikariConfig.setPassword( password );
        
        hikariConfig.setConnectionTimeout(10000L);
        hikariConfig.setIdleTimeout(10000L);
        hikariConfig.setMaxLifetime(1000L);   // NOTE: HikariCP enforces a 30000 ms minimum; lower values fall back to the default with a warning
        hikariConfig.setKeepaliveTime(1000L); // NOTE: must be >= 30000 ms and less than maxLifetime, otherwise keepalive is disabled
        hikariConfig.setMaximumPoolSize(100);
        hikariConfig.setMinimumIdle(30);
        
        return new com.zaxxer.hikari.HikariDataSource( hikariConfig );
        // return getDataSourceProperties().initializeDataSourceBuilder().type(HikariDataSource.class).build();
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean rdbmsEntityManager() { // entityManagerFactoryRef = "rdbmsEntityManager"
        LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
        em.setDataSource(     getDataSourceHikariConfig() );
        em.setPackagesToScan( packagesToScan );
        
        HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
        em.setJpaVendorAdapter(vendorAdapter);
        
        HashMap<String, Object> properties = new HashMap<>();
        //properties.put("hibernate.allow_update_outside_transaction",true);
        properties.put("hibernate.hbm2ddl.auto", env.getProperty("hibernate.hbm2ddl.auto"));
        properties.put("hibernate.dialect", env.getProperty("hibernate.dialect"));
        
        // With the following two properties, we’ll tell Hibernate that L2 caching is enabled, and give it the name of the region factory class
        if ( useL2Cache ) { // https://www.baeldung.com/hibernate-second-level-cache
            properties.put("hibernate.cache.use_second_level_cache", true);
            properties.put("hibernate.cache.region.factory_class", "org.hibernate.cache.ehcache.EhCacheRegionFactory");// Impl of RegionFactory
            
            if ( useQueryCache ) {
                properties.put("hibernate.cache.use_query_cache", true);
            }
        }
        
        em.setJpaPropertyMap(properties);
        return em;
    }
    @Bean
    public PlatformTransactionManager transactionManager() { // transactionManagerRef = "transactionManager"
        JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManager.setEntityManagerFactory( rdbmsEntityManager().getObject() );
        return transactionManager;
    }
    
    // JDBC style - Template's provide implementation of JdbcOperations
    @Bean
    public JdbcTemplate jdbcTemplate() {
        return new JdbcTemplate( getDataSourceHikariConfig() );
    }
    
    @Bean
    public NamedParameterJdbcTemplate namedParameterJdbcTemplate() {
        return new NamedParameterJdbcTemplate( getDataSourceHikariConfig() );
    }
}
DataBase - DataSourceConfig.getHikariCP() (timeout configuration)
/*
* The issue with HikariCP timing out and potential memory leaks when migrating from Java 8 to Java 17 can often be traced to configuration mismatches,
* changes in Java’s garbage collection and memory management, or compatibility issues with dependencies. By reviewing and adjusting configurations,
* updating libraries, and diagnosing memory usage, you can address these issues effectively.
*/
//@Bean(name = "dataSource")
public DataSource getHikariCP() { // Configuration Adjustments for HikariCP
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl("jdbc:mysql://localhost:3306/dspro?createDatabaseIfNotExist=true"); //("jdbc:mysql://localhost:3306/mydatabase");
    config.setDriverClassName("com.mysql.cj.jdbc.Driver");
    config.setUsername("root"); //("user");
    config.setPassword("root"); //("password");
    
    config.setMaximumPoolSize(10); // Adjust based on your needs      100   / 10
    config.setConnectionTimeout(30000); // Increase timeout if needed 10000 / 30000
    config.setIdleTimeout(600000); // Adjust idle timeout             10000 / 600000
    config.setMaxLifetime(1800000); // Adjust max lifetime            1000  / 1800000
    config.setKeepaliveTime(1000); // NOTE: HikariCP requires keepaliveTime >= 30000 ms and < maxLifetime; 1000 ms is disabled with a warning
    config.setMinimumIdle(30);
    
    HikariDataSource dataSource = new HikariDataSource(config);
    return dataSource;
}
