How do I force my JVM process to always occupy x GB of RAM?
It is related to my previous question.
I set Xms to 512M and Xmx to 6G for one Java process. I have three such processes.
My total RAM is 32 GB, of which 2 GB is always occupied.
I executed the free command to confirm that at least 27 GB was free, but my jobs required at most 18 GB at any time.
Everything was running fine. Each job occupied around 4 to 5 GB but used around 3 to 4 GB. I understand that Xmx doesn't mean the process will always occupy 6 GB.
When another process X was started on the same server by another user, it occupied 14 GB, and then one of my processes failed.
I understand that I need to add RAM or schedule the colliding jobs apart. The question is: how can I force my job to always use 6 GB, and why does it throw a "GC limit reached" error in this case?
I used visualvm to monitor the processes, and jstat as well.
Any advice is welcome.
Simple answer: -Xmx is not a hard limit on the JVM process. It only limits the heap available to Java inside the JVM. Lower your -Xmx and you may stabilize the process memory at a size that suits you.
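To directly address the "always occupy 6 GB" part of the question: a common HotSpot technique (not spelled out in the original answer, but standard practice) is to set the initial heap equal to the maximum and pre-touch the pages, so the full heap is committed at startup. The main class name below is a placeholder:

```shell
# Pin the heap at 6 GB from the start:
#   -Xms6g = -Xmx6g     -> initial heap equals maximum, so the heap never grows or shrinks
#   -XX:+AlwaysPreTouch -> touch every heap page during startup, forcing the OS
#                          to commit the physical memory immediately
java -Xms6g -Xmx6g -XX:+AlwaysPreTouch com.example.MyJob
```

Note that this only pins the heap; the process footprint will still exceed 6 GB because of the non-heap memory described below.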
Long answer: the JVM is a complex machine. Think of it as an OS for your Java code. The Virtual Machine does need extra memory for its own housekeeping (e.g. GC metadata), memory occupied by the threads' stacks, "off-heap" memory (e.g. memory allocated by native code through JNI, buffers), etc.
-Xmx only limits the heap size for objects: the memory that is dealt with directly in your Java code. Everything else is not accounted for by this setting.
There's a newer JVM setting, -XX:MaxRAM (1, 2), that tries to keep the entire process memory within that limit.
From your other question:
It is multi-threaded: 100 reader and 100 writer threads. Each one has its own connection to the database.
Keep in mind that the OS's I/O buffers also need memory for their own function.
If you have over 200 threads, you also pay the price: N*(stack size), plus approximately N*(TLAB size) reserved in the Young Gen for each thread (dynamically resizable):
java -Xss1024k -XX:+PrintFlagsFinal 2> /dev/null | grep -i tlab
size_t MinTLABSize = 2048
intx ThreadStackSize = 1024
Approximately half a gigabyte just for this (and probably more)!
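That figure can be sanity-checked with some quick shell arithmetic using the defaults printed above (a rough estimate: the MinTLABSize term is only the floor; real TLABs grow far larger, which is where the rest of the half-gigabyte comes from):

```shell
threads=200
stack_kb=1024     # ThreadStackSize from the -XX:+PrintFlagsFinal output above (KB)
min_tlab_b=2048   # MinTLABSize from the output above (bytes)

stacks_mb=$(( threads * stack_kb / 1024 ))
tlabs_kb=$(( threads * min_tlab_b / 1024 ))

echo "stack reservations: ${stacks_mb} MB"   # 200 MB
echo "minimum TLABs:      ${tlabs_kb} KB"    # 400 KB at the floor; real TLABs are much bigger
```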
Thread Stack Size (in Kbytes). (0 means use default stack size) [Sparc: 512; Solaris x86: 320 (was 256 prior in 5.0 and earlier); Sparc 64 bit: 1024; Linux amd64: 1024 (was 0 in 5.0 and earlier); all others 0.] - Java HotSpot VM Options; Linux x86 JDK source
In short: -Xss (stack size) defaults depend on the VM and OS environment.
Thread Local Allocation Buffers are more intricate and help against allocation contention/resource locking. The setting is explained here; for their function, see: TLAB allocation and TLABs and Heap Parsability.
Further reading: "Native Memory Tracking" and Q: "Java using much more memory than heap size".
why does it throw GC limit reached error in this case.
"GC overhead limit exceeded". In short: each GC cycle reclaimed too little memory, and the GC ergonomics decided to abort. Your process needs more memory.
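For reference, this error fires when the JVM spends roughly 98% of its time in GC while reclaiming less than 2% of the heap. The check can be tuned or disabled with real HotSpot flags, but disabling it only masks the symptom (a sketch, roughly in order of preference; `<...>` stands for the rest of your command line):

```shell
# Preferred: give the process more heap so GC cycles reclaim enough memory
java -Xmx8g <...>

# Last resort: disable the overhead check; this usually just delays
# a plain OutOfMemoryError instead of fixing anything
java -XX:-UseGCOverheadLimit <...>
```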
When another X process started on the same server with another user, it has occupied 14g. Then one of my process got failed.
Another point on running multiple large-memory processes back-to-back; consider this:
java -Xms28g -Xmx28g <...>
# above process finishes
java -Xms28g -Xmx28g <...>  # crashes, can't allocate enough memory
When the first process finishes, your OS needs some time to zero out the memory deallocated by the ending process before it can give those physical memory regions to the second process. This may take a while, and until then you cannot start another "big" process that immediately asks for the full 28 GB of heap (observed on WinNT 6.1). This can be worked around with:
- a lower -Xms, so the allocation happens later in the 2nd process's life-time
- a lower -Xmx heap
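Putting those two workarounds together, the second launch might look like this (a sketch; the 28 GB figure is carried over from the example above):

```shell
# First process: full 28 GB committed immediately
java -Xms28g -Xmx28g <...>

# Second process, started right after the first one exits: keep -Xms small so
# the heap grows lazily, giving the OS time to zero the freed pages; lowering
# -Xmx as well reduces how much must eventually be committed
java -Xms512m -Xmx28g <...>
```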