Java Memory TroubleShooting - ashishranjandev/developer-wiki GitHub Wiki
jps
- Ships with the JDK
- Can list running JVMs
- Command line tool

JVisualVM
- Ships with the JDK
- Very versatile tool
- GUI


Class Histogram
A listing of how much memory is used by each type of object
- Snapshot: What is in the heap currently
- Quick: Very fast to collect
- Low Detail: Doesn't list cause or object relationship
jmap -histo[:live] <pid>

- `[B` is a byte array
- `[C` is a char array
- `[Ljava.lang.Object;` is an Object array
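These internal names are what the JVM itself reports for array classes; a quick sketch to confirm them (class name `ArrayNames` is illustrative):

```java
// Prints the JVM-internal array class names that appear in jmap -histo output.
public class ArrayNames {
    public static void main(String[] args) {
        System.out.println(new byte[0].getClass().getName());   // [B
        System.out.println(new char[0].getClass().getName());   // [C
        System.out.println(new Object[0].getClass().getName()); // [Ljava.lang.Object;
    }
}
```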
Heap Dump
A copy of all live objects in the application's heap
- Snapshot - What is in the heap currently
- Slow and Large - Requires space to store and time to take the dump
- High Detail - Object Relationships and per object memory
jmap -dump:[live,]format=b,file=/path/to/dump <pid>
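On HotSpot JVMs the same dump can also be triggered from inside the process through the `HotSpotDiagnostic` MXBean; a minimal sketch (class name `HeapDumper` is illustrative):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void dump(String path, boolean liveOnly) throws Exception {
        // liveOnly=true forces a full GC first, like jmap's "live" option.
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, liveOnly); // path must not already exist
    }

    public static void main(String[] args) throws Exception {
        // Recent JDKs require the file name to end in .hprof
        java.io.File f = java.io.File.createTempFile("heap", ".hprof");
        f.delete(); // dumpHeap refuses to overwrite an existing file
        dump(f.getAbsolutePath(), true);
        System.out.println("Dumped to " + f);
    }
}
```

The resulting `.hprof` file opens directly in Eclipse MAT or JVisualVM.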



-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dump
Performs a heap dump when the JVM runs out of memory (or spends too long performing GC). Can be useful, but writing a large dump can delay restart times.
- Free heap dump analyzer - Eclipse MAT - http://eclipse.org/mat
- Other usable tools: JVisualVM, jhat
The previous two are snapshot-driven analysis; memory profiling is different.
- Event Driven (Pattern of activity over time)
- See patterns of activity
- Records memory allocations and related information
JVisualVM | Java Mission Control |
---|---|
Free, Open Source Tool | Free for use in Development |
Ships with JDK | Ships with Java 8+ |
Includes a Memory Profiler | Includes a Memory Profiler |
Can disable escape analysis | More Accurate |


Note: Memory profiling should be avoided on production servers as it slows down the process.
A memory leak occurs when memory that has been allocated and is no longer needed does not get released.
Retaining a reference:
```java
private Object obj;

public void foo() {
    obj = new Object();
}
```
In the above case the GC cannot free the object, because the field still references it after foo() returns.
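The leak above goes away if the reference is cleared once the object is no longer needed (or kept local to begin with); a sketch, with illustrative method names:

```java
public class NoLeak {
    private Object obj;

    public void acquire() {
        obj = new Object();
    }

    // Clear the field once the object is no longer needed; otherwise the
    // GC must keep the object alive for as long as this NoLeak is reachable.
    public void release() {
        obj = null;
    }

    public boolean holdsReference() {
        return obj != null;
    }
}
```

Where possible, prefer a local variable over a field: a local becomes unreachable as soon as the method returns, with no clean-up call to forget.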
How to find out?
- Retained Heap: Find objects that cause the most heap to be retained
- Filter: Top objects will usually be noise
- Investigate: Look at what these objects are referencing or being referenced by

If memory usage shows a sawtooth pattern whose baseline increases over time, there is likely a memory leak.


We take a heap dump and open it in Eclipse Memory Analyzer. Caches are often a source of memory leaks.
The GC runs periodically.
- Age of an object - the number of GC cycles it has survived
- Generations count - the number of different object ages surviving for a given class
Classes with increasing generations counts are leak candidates: objects of the class keep being created but are never collected.

- ClassLoaders - Leaks caused by Java's ClassLoader mechanism
- ThreadLocals - ThreadLocal variables
- Off-Heap Memory - Directly allocated memory
Class Loader - a mechanism for dynamically loading Java classes, as raw bytecode, into a JVM.
Why are class loaders used? All classes are loaded by a class loader. The JVM has three built-in class loaders - bootstrap, extension and system. User-defined class loaders are commonly used as an extension mechanism, for example to load plugins or in servlet containers.
If a class loaded by a class loader is referenced somewhere (e.g. by a logger), the class loader and every class it loaded cannot be unloaded, so they stay in memory.
ThreadLocal - a field where each thread has its own independently initialized copy of the variable.
ThreadLocals are automatically cleaned up when the thread exits, but if the thread is reused for another task (e.g. in a thread pool) there is a risk of a leak unless we remove the value.
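A sketch of the remove() discipline, assuming the thread may be a pooled one that outlives this request (class and method names are illustrative):

```java
public class ThreadLocalHygiene {
    // Per-thread scratch buffer, created lazily on first get()
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(StringBuilder::new);

    public static String format(String msg) {
        StringBuilder sb = BUFFER.get();
        try {
            sb.setLength(0);
            return sb.append("[app] ").append(msg).toString();
        } finally {
            // In a thread pool the thread is reused; without remove()
            // the value stays referenced by the pooled thread forever.
            BUFFER.remove();
        }
    }
}
```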
Heap | Off-Heap |
---|---|
Managed by JVM/GC | Managed Manually |
Arena allocated regions | Individually allocated buffers |
Java Objects allocated here | Custom Data Stores |
Types of Off-Heap Memory
- Native Code - native libraries invoked via JNI. Out of scope.
- Direct Buffers - off-heap buffers allocated by Java's ByteBuffer API. Used as buffers for writing to networks or files.
- Memory-mapped files - used for interprocess communication.
This storage won't appear as retained heap. The Java process may use more memory than -Xmx specifies, because -Xmx only limits the heap.
Tools
top -p <pid>
- Enforce a buffer allocation limit:
-XX:MaxDirectMemorySize=1g
- Stops unbounded buffer growth
- Still need to find the leaks
- We can use MBeans (JDK 7+) to introspect actual direct-buffer and memory-mapped-file memory consumption
To view buffer pool consumption:
We can install the JVisualVM-BufferMonitor plugin in JVisualVM.
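The same buffer pool figures are exposed programmatically through the platform BufferPoolMXBean (JDK 7+); a minimal sketch:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class BufferPools {
    public static void main(String[] args) {
        // HotSpot exposes a "direct" and a "mapped" pool
        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            System.out.printf("%-8s count=%d used=%d bytes capacity=%d bytes%n",
                    pool.getName(), pool.getCount(),
                    pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}
```

Polling these values over time shows whether direct-buffer usage grows without bound.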


Two reasons:
Memory Leak | High Memory Usage |
---|---|
Memory grows with activity over time | Grows with currently active work |
Don't free what's allocated | Simply using too much memory |
Causes:
- Memory has a finite limit - either set via -Xmx, or the JVM defaults to a quarter of the system's memory
- Application needs more than is available - inefficiency or lack of RAM
- Growth in relation to current load - can go down as well as up, e.g. concurrent sessions, loading all input data into memory
How to Resolve?
- Identify what's using your memory
- Reduce memory Consumption - Allocate less, don't reference much
- Measure again to validate
Tips
- Do not load the whole result set of a large input into memory
- Use primitive numeric types over boxed numeric types. The primitives are faster and use less memory.
- Recalculate instead of storing - caches trade memory for CPU time and can consume a lot of it
- Simplify the domain model - abstractions and complexity add overhead
- Increase available memory, e.g. -Xmx4G (be careful about physical RAM size, as overcommitting leads to swapping)
- Don't hold objects in memory that you don't need
- Monitoring - GC logs / JVisualVM / free APM (Prometheus) / paid APM - these can also alert
- Do performance testing on the same hardware as prod - same number of concurrent users, same input sizes
- Speculatively pick at low-hanging fruit
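On the primitives-over-boxed tip: a long[] stores each value inline (8 bytes per element), while a Long[] holds a reference to a separate heap object per element, roughly 3-4x more memory on typical JVMs. A sketch (class and method names are illustrative):

```java
public class PrimitiveVsBoxed {
    // Values live inline in the array: no per-element object headers
    public static long sumPrimitive(long[] values) {
        long total = 0;
        for (long v : values) total += v;
        return total;
    }

    // Each element is a separate heap object (header + value),
    // plus an 8-byte (or compressed 4-byte) reference in the array
    public static long sumBoxed(Long[] values) {
        long total = 0;
        for (Long v : values) total += v; // unboxing on every access
        return total;
    }
}
```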
Myth - Allocating Objects is Free
Two costs are involved:
- CPU cache locality - allocating objects reduces the effectiveness of your CPU's cache. RAM speed has not kept up with CPU advancements, so keeping hot data in the CPU cache is important. Modern CPUs have multi-layered caches; allocating objects writes to memory, washing useful data out of the cache. The effect is ameliorated by LRU-style cache eviction.
- Time spent allocating - actually allocating objects takes time in and of itself. Allocation in GC'd systems is very fast, but it is not free, and different collectors have different costs.
  - E.g. the Parallel Collector allocates faster than G1
Execution profiles won't help:
- Sampling happens at safepoints; the application is never sampled during allocation
- Profilers can slow down the application
- Execution profiles tell you about code; they have no way of reporting allocation rates
Memory profiles can help:
- Measure - identify allocation hotspots
- Correlate - see if they coincide with execution hotspots
- Optimize - reduce memory allocation within hotspots
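One common optimization once an allocation hotspot is found is to reuse a buffer instead of allocating a fresh one per call; a sketch, assuming single-threaded use (names are illustrative):

```java
public class ReusedBuffer {
    // Allocation-heavy: a new StringBuilder (and its backing array) per call
    public static String joinAllocating(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) sb.append(p).append(',');
        return sb.toString();
    }

    private final StringBuilder buffer = new StringBuilder();

    // Reuses one buffer across calls; NOT thread-safe
    public String joinReusing(String[] parts) {
        buffer.setLength(0); // keeps the grown backing array
        for (String p : parts) buffer.append(p).append(',');
        return buffer.toString();
    }
}
```

Measure before and after: reuse only pays off when the profiler shows the allocation actually dominates.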
Stop-the-world GC cycles (Full GCs) hurt latency.
General assumptions of GC:
- Most objects die young - based on research on object lifetimes
- Split memory into generations - collect the younger generations more frequently
GC pauses don't directly correlate to allocation rates; it depends on the type of GC.
Throughput Collector (Parallel Collector) - default until JDK 8
- Young generation
  - Copies live objects to the next generation
  - Costs scale with the size of the live dataset
- Old generation
  - Compacts objects down into contiguous memory
  - Costs scale with the size of the heap
GC frequency is driven by the generational region filling up.
G1 Collector - default from JDK 9
- Splits the heap into many regions
- Young gen - all collected on every pause
- Old gen - collection split up over different young-gen pauses
- GC can't necessarily keep up with too high an allocation rate
How does frequent allocation hurt latency?
- Fills up young gen - more frequent GC pauses
- Premature promotion - objects get promoted to old gen too early
- More frequent Full GCs - eventually old gen fills up too, resulting in longer GC pauses
Tip - instead of using a Map of boxed values, prefer primitive arrays where the keys allow it.
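When keys are dense small integers, a flat int[] can replace a HashMap entirely, avoiding boxed keys, boxed values and per-entry node objects; a sketch (names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class DenseCounts {
    // Boxed: every key and value is a heap object, plus per-entry node overhead
    public static Map<Integer, Integer> countBoxed(int[] values) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int v : values) counts.merge(v, 1, Integer::sum);
        return counts;
    }

    // Primitive: one flat int[] when keys are dense in [0, max)
    public static int[] countPrimitive(int[] values, int max) {
        int[] counts = new int[max];
        for (int v : values) counts[v]++;
        return counts;
    }
}
```

For sparse or non-integer keys, primitive-specialized collection libraries are the usual middle ground.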