
bassmadrigal 03-24-2021 03:56 PM

Java Program Taking Up 40GB+ of RAM
 
I am using filebot on my 14.2 install with jdk (v8) from SBo to move some files around on my system, and it is currently taking 55.0G of VIRT and 45.1G of RES according to htop. This happens regularly with this program (and I remember it happening with another java program, but I can't remember which). When it gets this big, I usually just close and reopen it, which resets the memory usage, but it always balloons back up until it needs closing and reopening again.

pmap -x shows the Kbytes as 55G (57693260K) and the RSS and Dirty at 45G (47217524K and 47208756K, respectively). The java program has spawned 83 threads. I have 64GB of RAM installed, and it seems that filebot will use up to a percentage of that, because I don't recall it ever going over my installed RAM amount, even when my system was running 32GB of RAM.

I've attached the output of the pmap -x command, in case it is helpful for anyone.

I know very little about java, but researching online led me to add -Xmx512m to the launch command to try and curtail this memory usage, yet it had no effect (at least not that I noticed).

Any suggestions on what I can do to change this behavior? I'll keep filebot open for some time in case anyone wants me to run commands against it.

yngvarr 03-24-2021 05:39 PM

Sounds like a memory leak. Maybe I'm blind, but I can't find Java version requirements anywhere. Maybe there is a Java version mismatch. The linked SlackBuild is quite old (22.3.2017). Have you tried a newer version? According to their forum, the latest is 4.9.3 from 16.3. Anyway, your best bet, I would say, is to contact the developer through the aforementioned forum.

bassmadrigal 03-24-2021 06:49 PM

Quote:

Originally Posted by yngvarr (Post 6233823)
Sounds like a memory leak. Maybe I'm blind, but I can't find Java version requirements anywhere. Maybe there is a Java version mismatch. The linked SlackBuild is quite old (22.3.2017). Have you tried a newer version? According to their forum, the latest is 4.9.3 from 16.3. Anyway, your best bet, I would say, is to contact the developer through the aforementioned forum.

This isn't just an issue with filebot, as I've had the issue with other java programs on my system; I just can't remember which. The filebot version I'm using is the latest one before the developer put up a paywall (I'm actually the maintainer for the version on SBo). All versions from 4.8.0 and up require a license.

It seems that the java on my system has no limit on the amount of memory it can use. It ends up quite frustrating when a single program takes up tens of GBs of RAM.

BrunoLafleur 03-25-2021 10:33 AM

Quote:

Originally Posted by bassmadrigal (Post 6233833)
This isn't just an issue with filebot, as I've had the issue with other java programs on my system; I just can't remember which. The filebot version I'm using is the latest one before the developer put up a paywall (I'm actually the maintainer for the version on SBo). All versions from 4.8.0 and up require a license.

It seems that the java on my system has no limit on the amount of memory it can use. It ends up quite frustrating when a single program takes up tens of GBs of RAM.

Java is garbage collected and can eat a lot of memory.

For filebot, I see you have a lot of threads. Maybe you can limit their stack size with the -Xss option?
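For illustration only, this is roughly what that affects: each started thread reserves its own stack (the -Xss value by default, or a size requested per thread), and that reservation shows up in VIRT even if most of it is never touched. A standalone sketch, not FileBot code:

Code:

// ThreadStackSketch.java -- illustrative only: each started thread reserves
// its own stack (the -Xss value by default), which counts toward VIRT even
// if most of it is never touched. A smaller stack can also be requested per
// thread via the four-argument Thread constructor.
public class ThreadStackSketch {
    public static void main(String[] args) throws InterruptedException {
        Runnable idle = () -> {
            try { Thread.sleep(60_000); } catch (InterruptedException e) { }
        };
        for (int i = 0; i < 83; i++) {        // roughly the thread count seen in pmap
            // request a 256 KB stack instead of the platform default (often 1 MB)
            Thread t = new Thread(null, idle, "worker-" + i, 256 * 1024);
            t.setDaemon(true);
            t.start();
        }
        System.out.println("threads started; compare VIRT in htop with and without -Xss");
        Thread.sleep(60_000);
    }
}

That said, with fewer than a hundred threads the stacks only add up to tens of MB, so they are unlikely to explain 40GB on their own.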

Martinus2u 03-25-2021 10:50 AM

Is it a pure java application or does it have other components (e.g. native libraries)? It would be surprising if the Java VM violated the -Xmx option by that much.
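One quick sanity check (a minimal standalone sketch using only the standard Runtime API, run with the same -Xmx flag) is to print the heap ceiling the VM actually applied; anything htop reports beyond that has to be native/off-heap rather than Java heap:

Code:

// HeapLimitCheck.java -- minimal sketch: prints the heap ceiling the JVM
// actually applied, e.g. when run as `java -Xmx512m HeapLimitCheck`.
// Note that -Xmx only bounds the Java heap; thread stacks, direct NIO
// buffers, metaspace and native libraries are not covered by it.
public class HeapLimitCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxHeap   = rt.maxMemory();    // effective -Xmx (approximately)
        long committed = rt.totalMemory();  // heap currently committed
        long free      = rt.freeMemory();   // unused part of the committed heap
        System.out.printf("max heap:       %d MB%n", maxHeap / (1024 * 1024));
        System.out.printf("committed heap: %d MB%n", committed / (1024 * 1024));
        System.out.printf("used heap:      %d MB%n", (committed - free) / (1024 * 1024));
    }
}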

There is a (cryptic) way of monitoring the memory consumption of the VM:

https://docs.oracle.com/javase/8/doc...ldescr007.html

Maybe it helps...

bassmadrigal 03-25-2021 01:10 PM

Quote:

Originally Posted by BrunoLafleur (Post 6234066)
For filebot, I see you have a lot of threads. Maybe you can limit their stack size with the -Xss option?

I wasn't sure how to use that option, so I tried various numbers and got errors (Invalid thread stack size: -Xss8) until I eventually realized it needs a k at the end. Once I added the k, it said the lowest it can accept is -Xss228k.

Code:

The stack size specified is too small, Specify at least 228k
When using that, I checked again and it was at 76 threads and still using a large amount of memory. With a single move operation (probably around 20 files), virtual memory went up to 10GB and RES memory went to 8.5GB.

I also started up Sweet Home 3D and in just placing some objects on a blank plan, Virtual memory shot up to 10.4GB.

As another check, I started filebot using jdk11 instead of jdk8 and I'm seeing very similar memory usage. In moving 100 files, it got up to 21.2G VIRT and 14.8G RES per htop. So the memory usage doesn't seem to be tied to the jdk version.

willkane 03-25-2021 02:04 PM

Take a look at this:

Restrict size of buffer cache in Linux

and start wrapping your head around how the Linux kernel manages cache memory.

If you copy really big files with 'cp' or even use 'diff' to compare big files, you'll see how quickly the RAM is consumed and how what was once in RAM gets moved to swap.

BrunoLafleur 03-25-2021 05:12 PM

Quote:

Originally Posted by willkane (Post 6234138)
Take a look at this:

Restrict size of buffer cache in Linux

and start wrapping your head around how the Linux kernel manages cache memory.

If you copy really big files with 'cp' or even use 'diff' to compare big files, you'll see how quickly the RAM is consumed and how what was once in RAM gets moved to swap.

cp on 64-bit systems allocates big buffers (up to 2GB). They are only partially mapped to physical memory.
Maybe your Java application is allocating big buffers for nothing. Also, having a lot of threads is probably not very efficient here in terms of allocated virtual memory.

jostber 03-25-2021 11:10 PM

Check if there is a memory leak in the program with JConsole?

https://www.cleantutorials.com/jcons...h-example-code

bassmadrigal 03-26-2021 12:28 AM

This is my ps aux output for the process so the numbers can be seen:

Code:

USER      PID %CPU %MEM    VSZ      RSS    TTY  STAT START  TIME COMMAND
jbhansen 10492 10.5 23.3 22463776 15391376 pts/6  Sl+ 22:49  3:21 java -Dunixfs=true -Dapplication.update=skip -DuseGVFS=true -DuseExtendedFileAttributes=true -DuseCreationDate=false -Djava.net.useSystemProxies=true -Djna.nosys=false -Djna.nounpack=true -Dapplication.deployment=deb -Dnet.filebot.gio.GVFS=/gvfs -Dapplication.dir=/home/jbhansen/.filebot -Djava.io.tmpdir=/home/jbhansen/.filebot/temp -Dnet.filebot.AcoustID.fpcalc=/home/jbhansen/slackbuilds/multimedia/filebot-source/new-server/filebot-4.7.9/dist//fpcalc -Xmx512m -Xss228k -XX:NativeMemoryTracking=summary -jar /home/jbhansen/slackbuilds/multimedia/filebot-source/new-server/filebot-4.7.9/dist//FileBot_4.7.9.jar

Quote:

Originally Posted by Martinus2u (Post 6234073)
Is it a pure java application or does it have other components (e.g. native libraries)? It would be surprising if the Java VM violated the -Xmx option by that much.

It does use fpcalc for music fingerprinting and libmediainfo for reading attributes of media files; however, my memory usage stays relatively level during those operations. It seems my increase is tied mainly to file moving operations.

Quote:

Originally Posted by Martinus2u (Post 6234073)
There is a (cryptic) way of monitoring the memory consumption of the VM:

https://docs.oracle.com/javase/8/doc...ldescr007.html

Maybe it helps...

Here's the output after tracking it during a copy process:

Code:

jbhansen@craven-moorhead:~$ jcmd 10492 VM.native_memory summary   
10492:

Native Memory Tracking:

Total: reserved=1993293KB, committed=735269KB
-                Java Heap (reserved=524288KB, committed=524288KB)
                            (mmap: reserved=524288KB, committed=524288KB)
 
-                    Class (reserved=1113292KB, committed=71588KB)
                            (classes #8847)
                            (malloc=15564KB #12465)
                            (mmap: reserved=1097728KB, committed=56024KB)
 
-                    Thread (reserved=34879KB, committed=34879KB)
                            (thread #58)
                            (stack: reserved=34596KB, committed=34596KB)
                            (malloc=184KB #300)
                            (arena=99KB #102)
 
-                      Code (reserved=255607KB, committed=39287KB)
                            (malloc=6007KB #10466)
                            (mmap: reserved=249600KB, committed=33280KB)
 
-                        GC (reserved=35314KB, committed=35314KB)
                            (malloc=16154KB #301)
                            (mmap: reserved=19160KB, committed=19160KB)
 
-                  Compiler (reserved=207KB, committed=207KB)
                            (malloc=76KB #523)
                            (arena=131KB #15)
 
-                  Internal (reserved=16126KB, committed=16126KB)
                            (malloc=16094KB #12203)
                            (mmap: reserved=32KB, committed=32KB)
 
-                    Symbol (reserved=11606KB, committed=11606KB)
                            (malloc=8082KB #64853)
                            (arena=3524KB #1)
 
-    Native Memory Tracking (reserved=1596KB, committed=1596KB)
                            (malloc=10KB #121)
                            (tracking overhead=1586KB)
 
-              Arena Chunk (reserved=379KB, committed=379KB)
                            (malloc=379KB)

Overall, it looks to be about 2GB taken according to this, but htop is showing 21.4G VIRT and 14.7G RES.

Quote:

Originally Posted by willkane (Post 6234138)
If you copy really big files with 'cp' or even use 'diff' to compare big files, you'll see how quickly the RAM is consumed and how what was once in RAM gets moved to swap.

Quote:

Originally Posted by BrunoLafleur (Post 6234188)
cp on 64-bit systems allocates big buffers (up to 2GB). They are only partially mapped to physical memory.
Maybe your Java application is allocating big buffers for nothing. Also, having a lot of threads is probably not very efficient here in terms of allocated virtual memory.

This could be a pointer... the memory usage goes up really fast when it's moving files. One of the other java programs that was using a ton of RAM was downloading many small files and saving them. So the issue could be tied to file operations, and possibly the memory used during those operations isn't being released.

I actually logged the VIRT (VSZ) amount and RES utilization (%) during my last moving operation (moving 100 files) and made a graph (thanks to this answer on StackOverflow). You can see it stays pretty level until the copy operations start.

Quote:

Originally Posted by jostber (Post 6234275)
Check if there is a memory leak in the program with JConsole?

https://www.cleantutorials.com/jcons...h-example-code

Looking through jconsole after attaching it to the process doesn't seem to indicate any major issues. The only thing I can see is that it shows "Committed virtual memory" as 22.4GB, while the current heap size is only 134MB (with the max being 492MB).

I took a few screenshots, in case you're any better at reading the output.

Overview Tab
Memory Tab - Interesting to note that after the file operations stopped, the memory usage dropped, but it has been slowly rising since (still only 152MB). If I run garbage collection, the memory usage shown goes down to >50MB, but the amount shown in htop doesn't change.
VM Summary Tab - This is what shows the large "Committed virtual memory" amount of 22.4GB.

Thanks for the suggestions everyone!

phenixia2003 03-26-2021 03:37 AM

Hello,

I could be wrong, but I guess that filebot uses the java nio API caches, which can lead to a native (off-heap) memory leak.

To work around this, you can bound the size of the buffers kept in the per-thread buffer caches with the jdk.nio.maxCachedBufferSize property. So, try adding -Djdk.nio.maxCachedBufferSize=262144 to the java command line.
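For illustration, the usual pattern that makes those caches grow is channel I/O done through a large heap buffer: the JDK silently allocates a temporary direct (native) buffer of the same size for the transfer and then keeps it cached for the thread. This is only a hypothetical sketch of that pattern, not FileBot's actual code:

Code:

// NioBufferCacheSketch.java -- hypothetical sketch, not FileBot code.
// Reading/writing a FileChannel with a large *heap* ByteBuffer makes the JDK
// allocate a temporary direct (off-heap) buffer of the same size and cache it
// per thread for reuse. With many worker threads each moving big files this
// way, native memory grows even though the Java heap stays small.
// -Djdk.nio.maxCachedBufferSize=262144 keeps buffers above 256 KB out of
// that per-thread cache so they are freed after each operation.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class NioBufferCacheSketch {
    static void copy(Path src, Path dst) throws IOException {
        // a multi-megabyte heap buffer: every read/write through it goes via
        // a temporary direct buffer that the thread then keeps cached
        ByteBuffer buf = ByteBuffer.allocate(64 * 1024 * 1024); // 64 MB, on-heap
        try (FileChannel in  = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.CREATE,
                                                StandardOpenOption.WRITE)) {
            while (in.read(buf) != -1) {
                buf.flip();
                out.write(buf);
                buf.clear();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        copy(Paths.get(args[0]), Paths.get(args[1]));
    }
}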

Hope this helps.

--
SeB

Martinus2u 03-26-2021 05:24 AM

Quote:

Originally Posted by bassmadrigal (Post 6234287)
Here's the output after tracking it during a copy process:

Looks like the JVM itself is off the hook with a heap size of 512 MB as configured. The rest is peanuts.

Let's hope phenixia2003 found the solution...

BrunoLafleur 03-26-2021 08:00 AM

Quote:

Originally Posted by phenixia2003 (Post 6234309)
Hello,

I could be wrong, but I guess that filebot uses the java nio API caches, which can lead to a native (off-heap) memory leak.

To work around this, you can bound the size of the buffers kept in the per-thread buffer caches with the jdk.nio.maxCachedBufferSize property. So, try adding -Djdk.nio.maxCachedBufferSize=262144 to the java command line.

Hope this helps.

--
SeB

This article explains the problem:
https://dzone.com/articles/troublesh...-off-heap-memo
It is not a memory leak. The buffers are kept for reuse, but this can consume a lot of memory. The parameter in the quoted post can manage the usage of those buffers.

With allocators there is always the question of whether to keep buffers for reuse later or to release the memory to the system when it is no longer needed. In the latest kernels, the madvise interface adds a MADV_FREE flag so memory is only released to the system when there is some memory pressure; otherwise, if the memory has not been given back, it can be reused directly by the application that released it. Some memory allocators, like jemalloc, use that new flag.

Jeebizz 03-26-2021 08:28 AM

The question is whose fault is this? The actual java program or the java VM? I remember how java is touted for its "memory clean up" or whatever. Sorry that I don't have anything actually useful to contribute, it is the morning - and when it comes to java, I like to shit on it every chance I get, because I just never liked java to begin with.

"B-b-b-ut muh garbage collecsthun."

BrunoLafleur 03-26-2021 09:08 AM

Quote:

Originally Posted by Jeebizz (Post 6234375)
The question is whose fault is this? The actual java program or the java VM? I remember how java is touted for its "memory clean up" or whatever. Sorry that I don't have anything actually useful to contribute, it is the morning - and when it comes to java, I like to shit on it every chance I get, because I just never liked java to begin with.

"B-b-b-ut muh garbage collecsthun."

It's not a fault; applications and the system use a lot of caches and buffers. Virtual memory is not always mapped to physical memory, and 64-bit systems don't try to be smart about it. 32-bit apps are less tolerant.

For file copying, allocating big buffers, as is usually done, is not very smart, since the files are already page-cached in memory. So a loop with a small buffer should be better. I think using big buffers for copying data is a bad habit; streaming and system paging would be better.
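Something along these lines, as a minimal sketch (not what FileBot actually does), is what I mean by a small-buffer copy loop; the kernel page cache takes care of the throughput:

Code:

// SmallBufferCopy.java -- minimal sketch of a copy loop that reuses one small
// fixed buffer for the whole file, relying on the kernel page cache instead
// of large user-space buffers.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SmallBufferCopy {
    public static void copy(Path src, Path dst) throws IOException {
        byte[] buf = new byte[64 * 1024];   // small, fixed 64 KB buffer
        try (InputStream in = Files.newInputStream(src);
             OutputStream out = Files.newOutputStream(dst)) {
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        copy(Paths.get(args[0]), Paths.get(args[1]));
    }
}

Files.copy(src, dst) or FileChannel.transferTo() would do the same job and let the JDK or the kernel pick the buffering strategy.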

