I am pretty hyped for Project Loom, but there is one thing that I can't fully understand.
Most Java servers use thread pools with a limit of a few hundred threads (200, 300, ...). However, the OS does not stop you from spawning many more; I've read that with special configuration on Linux you can reach huge numbers.
OS threads are more expensive: they are slower to start and stop, they incur context-switching overhead (magnified by their number), and you depend on the OS, which might refuse to give you more threads.
Having said that, virtual threads also consume similar amounts of memory (or at least that is what I understood). With Loom we would get tail-call optimization, which should reduce memory usage. Also, synchronization and copying of thread context should still be problems of a similar size.
Indeed, you are able to spawn millions of virtual threads:
public static void main(String[] args) {
    for (int i = 0; i < 1_000_000; i++) {
        Thread.startVirtualThread(() -> {
            try {
                Thread.sleep(1000);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
    }
}
The code above breaks at around 25k threads with an OutOfMemoryError when I use platform threads instead.
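For comparison, here is a sketch of the platform-thread version of the same loop. The thread count is kept deliberately small so the sketch actually runs; the value to raise to reproduce the OOM is an assumption based on the question's description.

```java
// Platform-thread version of the same loop. Each Thread maps 1:1 to an
// OS thread whose stack is reserved up front (often around 1 MB by default),
// so pushing COUNT toward 1_000_000 exhausts memory or OS limits long
// before the loop completes. COUNT is kept small here so the sketch runs.
public class PlatformThreadDemo {
    static final int COUNT = 100; // raise toward 1_000_000 to reproduce the OOM

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[COUNT];
        for (int i = 0; i < COUNT; i++) {
            threads[i] = new Thread(() -> {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("All " + COUNT + " platform threads finished");
    }
}
```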
My question is: what exactly makes these threads so lightweight? What prevents us from spawning a million platform threads and working with them? Is it only context switching that makes regular threads so "heavy"?
One very similar question
Things I found so far:
One big advantage of coroutines (and therefore of virtual threads) is that they can deliver high levels of concurrency without the drawback of callbacks.
Let me first introduce Little's Law:
concurrency = arrival_rate * latency
And we can rewrite this to:
arrival_rate = concurrency/latency
In a stable system, the arrival rate equals throughput.
throughput = concurrency/latency
To increase throughput, you have 2 options: increase concurrency, or reduce latency.
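Little's Law can be worked through with concrete numbers (the latency and thread-count values below are made up for illustration):

```java
// Little's Law: throughput = concurrency / latency.
// With 200 pooled threads each blocked ~50 ms per request, throughput
// tops out at 4,000 req/s; a million virtual threads lift that ceiling
// proportionally (assuming the backend can actually absorb the load).
public class LittlesLaw {
    static double throughput(double concurrency, double latencySeconds) {
        return concurrency / latencySeconds;
    }

    public static void main(String[] args) {
        double latency = 0.050; // 50 ms per request, assumed
        System.out.println("200 pooled threads  -> " + throughput(200, latency) + " req/s");
        System.out.println("1M virtual threads  -> " + throughput(1_000_000, latency) + " req/s");
    }
}
```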
With regular threads, it is difficult to reach high levels of concurrency with blocking calls due to context-switch overhead. Requests can be issued asynchronously in some cases (e.g. NIO + epoll, or Netty's io_uring binding), but then you need to deal with callbacks and callback hell.
With a virtual thread, the request can be issued asynchronously, parking the virtual thread so that another virtual thread can be scheduled. Once the response is received, the virtual thread is rescheduled, and all of this happens completely transparently. The programming model is much more intuitive than classic threads plus callbacks.
Fundamentally, any implementation of a thread, either lightweight or heavyweight, depends on two constructs: a continuation (a piece of sequential code that can suspend itself and be resumed later) and a scheduler (which assigns continuations to CPU cores).
There are two task types for threads: CPU-bound and I/O-bound.
Concurrency is about the OS scheduler and having non-blocking I/O tasks in a thread's life cycle (which is different from parallelism). I/O-bound programs are the opposite of CPU-bound programs: they spend most of their time waiting for input or output operations to complete while the CPU sits idle. I/O operations can consist of reads and writes to main memory or network interfaces.
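The CPU-bound vs. I/O-bound distinction can be made concrete with a small sketch; the "I/O" wait is simulated with a sleep standing in for a blocking network or disk call:

```java
// A CPU-bound task keeps a core busy; an I/O-bound task mostly waits.
// The sleep below is an assumption: it stands in for a real blocking
// network or disk operation during which the core sits idle.
public class BoundDemo {
    static long cpuBound() { // busy the CPU: sum of squares
        long acc = 0;
        for (long i = 0; i < 50_000_000L; i++) {
            acc += i * i;
        }
        return acc;
    }

    static void ioBound() throws InterruptedException {
        Thread.sleep(200); // CPU idle while "waiting for I/O"
    }

    public static void main(String[] args) throws InterruptedException {
        long t0 = System.nanoTime();
        cpuBound();
        long cpuMs = (System.nanoTime() - t0) / 1_000_000;

        t0 = System.nanoTime();
        ioBound();
        long ioMs = (System.nanoTime() - t0) / 1_000_000;

        System.out.println("cpu-bound took " + cpuMs + " ms (core busy)");
        System.out.println("io-bound took " + ioMs + " ms (core idle)");
    }
}
```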
The NIO programming paradigm is one of the first topics that comes to mind when we talk about concurrency (partially available since July 2011 with JDK 7, and fully introduced in JDK 8). With multithreading, managing the life cycle of threads pulled from a thread pool, and proper callbacks, we can roughly achieve that.
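The callback style that NIO.2 imposes looks like the sketch below, which reads a temp file through an `AsynchronousFileChannel` and reacts in a `CompletionHandler`. One level of nesting is tolerable; chained I/O operations in this style quickly become "callback hell".

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.CompletionHandler;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.CountDownLatch;

// NIO.2 (JDK 7) callback style: the read completes on a background thread
// and you react in a CompletionHandler instead of just calling a blocking read.
public class NioCallbackDemo {
    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("nio-demo", ".txt");
        Files.writeString(file, "hello nio");

        CountDownLatch done = new CountDownLatch(1);
        ByteBuffer buffer = ByteBuffer.allocate(64);

        try (AsynchronousFileChannel channel =
                 AsynchronousFileChannel.open(file, StandardOpenOption.READ)) {
            channel.read(buffer, 0, null, new CompletionHandler<Integer, Void>() {
                @Override
                public void completed(Integer bytesRead, Void attachment) {
                    System.out.println("read " + bytesRead + " bytes via callback");
                    done.countDown();
                }

                @Override
                public void failed(Throwable exc, Void attachment) {
                    exc.printStackTrace();
                    done.countDown();
                }
            });
            done.await(); // wait inside try so the channel stays open
        }
        Files.deleteIfExists(file);
    }
}
```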
But on the other hand, JVM threads (both daemon and user threads) are wrapped around OS threads, and OS threads are expensive resources whose number is limited. This is where virtual threads, the Project Loom approach, come in.
In recent OpenJDK prototypes, a new class named Fiber was introduced to the library alongside the Thread class. Since the planned API for fibers is similar to Thread, user code should also remain similar. However, there are two main differences: a fiber is scheduled by the JVM rather than by the OS, and a fiber's memory footprint is far smaller than the stack an OS thread reserves up front.
You may find more here.