Why would you use GHC RTS option -V0?

According to the GHC runtime system control documentation, using the -V0 option disables the RTS timer signal.

Can someone explain why you would do such a thing?

E.g.: ghc +RTS -V0 -RTS -rtsopts -O2 -o Solution Solution.hs

The last paragraph of the docs (quoted below) hints at a possible reason: context switching becomes deterministic, and that can help debugging.

-V ⟨secs⟩ Default: 0.02

Sets the interval that the RTS clock ticks at, which is also the sampling interval of the time and allocation profile. The default is 0.02 seconds. The runtime uses a single timer signal to count ticks; this timer signal is used to control the context switch timer (see Using Concurrent Haskell) and the heap profiling timer (see RTS options for heap profiling). Also, the time profiler uses the RTS timer signal directly to record time profiling samples.

Normally, setting the -V ⟨secs⟩ option directly is not necessary: the resolution of the RTS timer is adjusted automatically if a short interval is requested with the -C ⟨s⟩ or -i ⟨secs⟩ options. However, setting -V ⟨secs⟩ is required in order to increase the resolution of the time profiler.

Using a value of zero disables the RTS clock completely, and has the effect of disabling timers that depend on it: the context switch timer and the heap profiling timer. Context switches will still happen, but deterministically and at a rate much faster than normal. Disabling the interval timer is useful for debugging, because it eliminates a source of non-determinism at runtime.
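
To see what "deterministic context switching" means in practice, here is a minimal sketch. The file name Race.hs is an assumption, and whether you actually observe run-to-run differences with the default timer depends on the GHC version, the -threaded flag and output buffering, so treat this as an illustration only:

-- Compile with RTS options enabled, then run with and without -V0:
--   ghc -O2 -rtsopts Race.hs
--   ./Race            -- timer-driven switches: interleaving may vary per run
--   ./Race +RTS -V0   -- no RTS timer: switches only at allocation points,
--                     -- so the interleaving should be repeatable
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_)

main :: IO ()
main = do
  done <- newEmptyMVar
  _ <- forkIO $ do
    forM_ [1 .. 5 :: Int] $ \i -> putStrLn ("worker " ++ show i)
    putMVar done ()
  forM_ [1 .. 5 :: Int] $ \i -> putStrLn ("main   " ++ show i)
  takeMVar done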

I guess using -V0 also makes SIGALRM / SIGVTALRM available to the application; normally those signals are reserved by the runtime system.
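
Along those lines, here is a sketch of claiming SIGVTALRM for the application once the RTS timer is off. It assumes a POSIX platform and the unix package; whether the RTS really occupies SIGALRM / SIGVTALRM depends on the GHC version and platform (newer runtimes may use timer_create/timerfd on Linux), so this is an illustration rather than a guarantee:

-- Build:  ghc -rtsopts Sig.hs      (Sig.hs is a hypothetical file name)
-- Run:    ./Sig +RTS -V0
-- Signal: kill -VTALRM <pid> from another shell
import System.Posix.Signals (Handler (Catch), installHandler, sigVTALRM)
import Control.Concurrent (threadDelay)

main :: IO ()
main = do
  _ <- installHandler sigVTALRM (Catch (putStrLn "got SIGVTALRM")) Nothing
  putStrLn "waiting for SIGVTALRM..."
  threadDelay (10 * 1000 * 1000)  -- keep the process alive for ten seconds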
