Troubleshooting

OutOfMemoryError

This kind of issue is often caused when files are parsed by third-party libraries that have bugs or that were not written to handle truncated or corrupted data, which is commonly recovered by forensic software.

Sometimes it is possible to work around the OutOfMemoryError (OOME) by increasing the heap memory available to the application. By default, the Java Virtual Machine usually allocates at most 1/4 of your physical memory as heap. You can increase that by adding, for example, the -Xmx8G parameter after iped.exe to allow IPED to use up to 8GB of RAM as heap. Try using about half of the physical memory, but do not allocate more than 30GB of heap (above roughly 32GB the JVM loses compressed object pointers, which can actually hurt performance).
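For example, a minimal command sketch with an 8GB heap (the evidence and output paths below are hypothetical, using IPED's usual -d input and -o output parameters):

iped.exe -Xmx8G -d C:\evidence\image.E01 -o C:\cases\case01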

In other cases, increasing the max/min heap size is not enough. A corrupted or malicious file can cause a vulnerable third-party library to leak memory in an infinite loop, eventually using all available heap memory, no matter how much heap you have. The recommended solution is to turn on the enableExternalParsing option in conf/ParsingTaskConfig.txt. This option transfers file parsing to external processes, isolating OOME problems from the main application: only the external processes will crash, not the main process, and they will be restarted.

Paradoxically, this option needs more physical memory. By default, one external parsing process, with a maximum heap of 512MB, is started for every two processing threads. The maximum heap size and the number of external parsing processes can be adjusted with the externalParsingMaxMem and numExternalParsers options.
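As an illustration, the relevant lines in conf/ParsingTaskConfig.txt could look like the sketch below; the values are examples, not recommendations, and the exact syntax may differ between versions:

enableExternalParsing = true
# maximum heap per external parsing process
externalParsingMaxMem = 512M
# number of external parsing processes
numExternalParsers = 4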

In some very rare cases, it was observed that the robustImageReading option, explained below, exhausted the native system memory (OutOfMemoryError: native memory allocation (malloc) failed or OutOfMemoryError: native memory exhausted) because of Sleuth Kit memory leaks. If you get an error like that, try disabling that option.
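For example, assuming the key = value syntax used by IPED configuration files:

# in conf/FileSystemConfig.txt
robustImageReading = false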

If none of the solutions above works, the memory leak was possibly caused by an internal IPED module (this we can try to fix). Please check whether a *.hprof heap dump file was created in your current directory. If not, run the processing again with the -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=/path/to/dump java options. Then open an issue and attach the processing log and a link to the compressed heap dump, since the dump will be too large to attach to the issue directly.
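For example, assuming iped.exe forwards -XX options to the JVM the same way it forwards -Xmx (evidence and output paths are hypothetical):

iped.exe -Xmx8G -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dump -d image.E01 -o output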

Processing frozen

This rarely occurs, because the problematic modules already have timeout control. But sometimes new versions of libraries introduce new bugs, and very simple tasks without timeout control, like checking file signatures, can freeze. This will be solved in the future, when timeout control is generalized to all modules.

Please report the issue: take a screenshot of the processing window and a thread dump of the IPED process, using jvisualvm (shipped with the Oracle JDK 8) or jstack (jstack -l <pid>), and attach both to the new issue.
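For example, to save the thread dump to a file (replace 1234 with the actual PID of the IPED java process):

jstack -l 1234 > iped_thread_dump.txt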

For now, to work around the problem, take the full path of the problematic file from the processing window and create a script in scripts/tasks/IgnoreFile.js (based on scripts/tasks/ExampleScriptTask.js) with a rule like the one below to ignore the file:

// Full path of the problematic file, as shown in the processing window
var pathToIgnore = "path_from_screenshot";

function process(item){
    // Mark the item to be ignored by all later tasks
    if(item.getPath().indexOf(pathToIgnore) != -1){
        item.setToIgnore(true);
    }
}

That script must be installed as the first task in conf/TaskInstaller.xml, so other modules will not try to process the file. Starting with version 3.18, you can stop and resume the processing from the last commit point with the --continue option, as in the sketch below. Older versions do not have resume support, so you will need to restart the processing from the beginning.
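For example, keeping the same (hypothetical) command line used before and appending --continue:

iped.exe -d image.E01 -o output --continue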

JVM Crashes

Rarely, the Java Virtual Machine may crash with a message like this:

# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f0d05d43315, pid=2219, tid=0x00007f0d0423e700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_131-b31) (build 1.8.0_131-b31)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b31 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V  [libjvm.so+0x5c5315]  G1ParScanThreadState::copy_to_survivor_space(InCSetState, oopDesc*, markOopDesc*)+0x45
#
# Core dump written. Default location: /aem/author core.2219
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp

When this happens, a file named hs_err_pidXXX.log with error details is created in the user's current directory.

This error is usually caused by bugs in native libraries, most often in The Sleuth Kit. It can be solved by enabling the robustImageReading option in conf/FileSystemConfig.txt. It will open and use external processes to read image (dd/e01) contents. If the Sleuth Kit crashes, only the auxiliary processes will crash, and they will be restarted by the main process. You can adjust the number of external processes with the numImageReaders option.
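An illustrative sketch of the relevant lines in conf/FileSystemConfig.txt (the number of readers is an example value, not a recommendation):

robustImageReading = true
# number of external image reading processes
numImageReaders = 4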

Note: a great side effect of enabling robustImageReading is that processing E01 images will be a lot faster, because file decompression will be done in parallel by all external processes. Without it, the Sleuth Kit must synchronize all reads from E01 images, because libewf is not thread safe.

Finally, sometimes this error is caused by bugs in other native libraries used by file parsers (SevenZipParser, EdgeWebCacheParser, SQLiteParser). Turning on enableExternalParsing in conf/ParsingTaskConfig.txt, as described above, should solve these cases.
