
Odd behavior with setInterval() in node.js (windows only, works in linux)

I have been having some strange issues with Node.js on Windows machines that do not occur on Linux machines.

When using the setInterval() function at low delay settings (< 100 ms), the provided function is called less often than I would expect, in a way that is very consistent. With the example code I quickly wrote up below, Windows machines consistently give different results than Linux machines, even when running on the same hardware.

// time in ms between executions
var delay = 10;

// how many seconds to run the test
var testPeriods = 10;

var counter = 0;
var startTime = Date.now();

var interval = setInterval(() => {
    ++counter;
    if (Date.now() >= startTime + 1000 * testPeriods) {
        console.log('Mean Function Calls per Second:', counter / testPeriods);
        clearInterval(interval);
    }
}, delay);

Some test data, from my PC, which dual-boots Linux and Windows:

Delay (ms) | Expected | Linux result | Windows result
-----------+----------+--------------+---------------
       100 |       10 |         10.0 |            9.2
        50 |       20 |         20.0 |           16.0
        25 |       40 |         39.8 |           32.0
        10 |      100 |         98.4 |           63.8

You'll notice that the Linux results match almost perfectly until the lower delay settings, and even then they aren't far off. The Windows results, on the other hand, fall well short.

I figure it's likely that the Windows version of Node is poorly optimized compared to the Linux version. So at first, I assumed that the function I was providing simply took too long to execute, thereby delaying the next execution. However, this does not seem to be the case. After all, if I assume the provided function takes similar time to execute no matter what, then I know the Windows machine can execute it up to ~63 times a second at the lowest delay setting. So why does it only execute ~32 times a second when it should be doing ~40 (with the delay at 25 ms)?
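One way to check that hypothesis directly (a rough sketch I wrote for illustration; the delay and sample count are arbitrary values, not from the question) is to time the gap between consecutive callbacks rather than counting calls per second:

```javascript
// Measure the actual gap between consecutive setInterval callbacks.
// If the callback itself were the bottleneck, gaps would grow with its
// run time; if OS timer granularity is the cause, the gaps are pinned
// near a multiple of the tick regardless of how cheap the callback is.
const delay = 25;    // requested interval in ms (illustrative value)
const samples = 40;  // how many gaps to collect (illustrative value)

const gaps = [];
let last = process.hrtime.bigint();

const timer = setInterval(() => {
  const now = process.hrtime.bigint();
  gaps.push(Number(now - last) / 1e6); // nanoseconds -> milliseconds
  last = now;
  if (gaps.length >= samples) {
    clearInterval(timer);
    const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length;
    console.log(`mean gap: ${mean.toFixed(2)} ms (requested ${delay} ms)`);
  }
}, delay);
```

On a machine showing the behavior above, the mean gap should come out noticeably longer than the requested delay even though the callback does almost no work.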

If anybody could give me some insight into why this is happening or what I am doing wrong, that would be much appreciated.

EDIT: Simplified code at the suggestion of @jfriend00, and updated test results to match.

How quickly a piece of scheduled code will be called depends on your OS's timer interrupt interval. Depending on the OS, this setting is known as the tick or the jiffy. On some OSes it is configurable.

OSes periodically run their own code to do things like making processes take turns on the CPU, cleaning up memory, etc. To do this, the OS sets up a timer interrupt (conceptually like setInterval, only at the hardware level) to run its own code.

This is actually how OSes manage to run more processes/threads than there are cores available, and it is this same mechanism that drives setInterval and setTimeout in JavaScript at the OS level. So both threads and setInterval rely on the same OS mechanism; the only difference is that threads can use several cores at once, while setInterval can only use the same core as the main JS thread.
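You can probe this timer floor directly from Node (a rough sketch; the round count is an arbitrary value I chose): chain the shortest timeouts the API allows and measure how long each one actually takes.

```javascript
// Probe the effective timer granularity by chaining 1 ms timeouts.
// On a system with a fine-grained tick the mean comes out near 1 ms;
// on a system with a coarser tick it is pinned to the tick length.
const rounds = 50; // how many timeouts to measure (illustrative value)
const waits = [];

function probe(prev) {
  const now = process.hrtime.bigint();
  if (prev !== undefined) waits.push(Number(now - prev) / 1e6); // ns -> ms
  if (waits.length >= rounds) {
    const mean = waits.reduce((a, b) => a + b, 0) / waits.length;
    console.log(`mean 1 ms timeout actually took ${mean.toFixed(2)} ms`);
    return;
  }
  setTimeout(() => probe(now), 1);
}

probe();
```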

There is a trade-off in how often the tick fires. A very short tick makes the OS more real-time, since the lag in event processing is greatly reduced. However, it also means the OS's own code uses a higher percentage of CPU time, leaving less for your apps. A longer tick gives your apps more CPU time and thus more throughput.

I'm actually a bit surprised by your results. Years ago (early 2000s) the default jiffy setting on Linux was quite long, as Linux was optimized more for server use and thus for throughput. Windows, on the other hand, had a shorter tick, since it was optimized more for real-time tasks such as playing games and running graphical apps. Perhaps things have changed over the years.

So yes, if you want cross-platform consistency, there is a minimum setInterval time that works across OSes. I would have guessed that 10 ms would be enough, as that was my experience over the years (1 ms would obviously expose the differing behavior of various OSes). If 10 ms does not work these days, you can try a longer interval such as 50 ms‡ or 100 ms.
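If what matters is the long-run average rate rather than each individual gap, a common workaround (a sketch of the general technique, not something from the question) is to schedule each timeout against an absolute target time, so a late tick shortens the next wait instead of letting the lateness accumulate:

```javascript
// Drift-compensating repeater: each timeout aims at start + n * interval,
// so lateness in one tick is absorbed by the next rather than accumulating.
// Note this cannot beat the OS tick floor; it only keeps the long-run
// average close to the requested rate.
function repeatAtRate(fn, intervalMs, times, done) {
  const start = Date.now();
  let n = 0;
  function tick() {
    fn(n++);
    if (n >= times) {
      if (done) done(Date.now() - start);
      return;
    }
    // Call number n (0-indexed) should fire at start + (n + 1) * interval.
    const target = start + (n + 1) * intervalMs;
    setTimeout(tick, Math.max(0, target - Date.now()));
  }
  setTimeout(tick, intervalMs);
}

// Example: 20 calls at a requested 25 ms spacing.
repeatAtRate(() => {}, 25, 20, (elapsed) => {
  console.log(`20 calls in ${elapsed} ms (ideal: 500 ms)`);
});
```

A plain setInterval that fires late stays late forever; here each delay is recomputed from the wall clock, which is why the total run time tracks times × interval even when individual ticks are coarse.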

‡ Note: 50 ms (20 fps) is the update interval of radio-control transmitters, so it is real-time enough for human reflexes to fly planes, helicopters, and drones without crashing.
