
Researching JavaScript DEOPT reasons in Chrome 79 for a pet project

I've been tinkering with a JavaScript chess engine for a while. Yeah yeah, I know (chuckles), not the best platform for that sort of thing. It's a bit of a pet project; I'm enjoying the academic exercise and am intrigued by the challenge of approaching compiled-language speeds. There are other quirky challenges in JavaScript, like the lack of 64-bit integers, that make it unfit for chess but paradoxically interesting, too.

A while back I realized that it was extremely important to be careful with constructs, function parameters, and so on. Everything matters in chess programming, but it seems that a great deal matters when working with a JIT compiler (V8's TurboFan) via JavaScript in Chrome.

Via some traces, I'm seeing some eager DEOPTs that I'm having trouble figuring out how to avoid.
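(For context, traces like the ones discussed here can be produced with V8's tracing flags; `engine.js` below is a placeholder for whatever script you're profiling.)

```shell
# Print optimization and deoptimization events from V8
# (filename is a placeholder):
node --trace-opt --trace-deopt engine.js

# The same flags can be passed through to Chrome's V8:
chrome --js-flags="--trace-opt --trace-deopt"
```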

DEOPT eager, wrong map

The code that's referenced by the trace:

if (validMoves.length) { ...do some stuff... }

The trace points directly to the validMoves.length expression in the if conditional. validMoves is only ever an empty array [] or an array of move objects [{Move},{Move},...].

Would an empty array [] kick off a DEOPT?

Incidentally, I have lots of lazy and soft DEOPTs, but if I understand correctly, these are not so crucial and are just part of how V8 wraps its head around my code before ultimately optimizing it. In --trace-opt, the functions with soft/lazy DEOPTs do seem to eventually be optimized by TurboFan, and perhaps don't hurt performance in the long run so much. (For that matter, the eager-DEOPT'ed functions seem to eventually get reoptimized, too.) Is this a correct assessment?

Lastly, I have found at times that by breaking up functions that have shown DEOPTs, into multiple smaller function calls, I've had notable performance gains. From this I've inferred that the larger more complex functions are having trouble getting optimized and that by breaking them up, the smaller compartmentalized functions are being optimized and thus feeding my gains. Does that sound reasonable?

the lack of 64bit integers

Well, there are BigInts now :-) (But in most engines/cases they're not suitable for high-performance operations yet.)
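For example, here's a minimal sketch of using BigInt for 64-bit values, as you might for a chess bitboard (the identifiers are illustrative, not from the question's engine):

```javascript
// BigInt literals use the `n` suffix and aren't limited to 53 bits.
const FULL_BOARD = 0xFFFFFFFFFFFFFFFFn; // all 64 squares set
const A1 = 1n;                          // bit 0 = square a1
const occupied = A1 | (1n << 63n);      // a1 and h8 occupied

// BigInt.asUintN explicitly wraps a result to 64 bits:
const wrapped = BigInt.asUintN(64, FULL_BOARD + 1n); // 0n
```

Note the caveat above, though: BigInt operations are currently much slower than plain number arithmetic in most engines, which is why they're not yet a great fit for a hot move-generation loop.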

Would an empty array [] kick off a DEOPT?

Generally no. There are, however, different internal representations of arrays, so that may or may not be what's going on there.
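A "wrong map" deopt is usually about object shapes (V8's "maps", also known as hidden classes) rather than array length. Here's a hedged sketch of the general principle; the Move fields are made up for illustration:

```javascript
// Objects created with the same properties in the same order share a
// map, so optimized code that checks the map keeps working.
function makeMove(from, to, captured) {
  // `captured` is always present (null when there's no capture),
  // so every move object has the same shape.
  return { from, to, captured };
}

const validMoves = [];
validMoves.push(makeMove(12, 28, null)); // quiet move
validMoves.push(makeMove(28, 35, 3));    // capture

// By contrast, conditionally adding properties gives objects
// different maps, which can fail the optimized code's map check:
function makeMoveBad(from, to, captured) {
  const m = { from, to };
  if (captured !== null) m.captured = captured; // shape diverges here
  return m;
}
```

If the deopt really is about the array itself, the analogous advice applies: keeping an array's contents uniform (always the same kind of element) lets V8 keep a single internal representation for it.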

[lazy, soft, eager...] Is this a correct assessment?

Generally yes. Usually you don't have to worry about deopts, especially for long-running programs that experience a few deopts early on. This is true for all the adjectives that --trace-deopt reports -- those are all just internal details. ("eager" and "lazy" are direct opposites of each other and simply indicate whether the activation of the function that had to be deoptimized was top-of-stack or not. "soft" is a particular reason for a deopt, namely a lack of type feedback, and V8 choosing to deoptimize instead of generating "optimized" code despite lack of type feedback, which wouldn't be very optimized at all.)

There are very few cases where you, as a JavaScript developer, might want to care about deopts. One example is when you've encountered a case where the same deopt happens over and over again. That's a bug in V8 when it happens; these "deopt loops" are rare, but occasionally they do occur. If you have found such a case, please file a bug with repro instructions.

Another case is when every CPU cycle matters, especially during startup / in short-running applications, and some costly function gets deoptimized for a reason that might be avoidable. That doesn't seem to be your case though.

[breaking up functions...] Does that sound reasonable?

Breaking up functions can be beneficial, yes; especially if the functions you started with were huge. Generally, functions of all sizes get optimized; obviously larger functions take longer to optimize. This is a tricky area with no simple answers; if functions are too small then that's not helpful for performance either. V8 will perform some inlining, but the decisions are based on heuristics that naturally aren't always perfect. In my experience, manually splitting functions can in particular pay off for long-running loops (where you'd put the loop into its own function).

EDIT: to elaborate on the last point as requested, here's an example: instead of

function big() {
  for (...) {
    // long-running loop
  }
  /* lots more stuff... */
}

You'd split it as:

function loop() {
  for (...) {
    // same loop as before
  }
}
function outer() {
  loop();
  /* same other stuff as before */
}

For a short loop, this is totally unnecessary, but if significant time is spent in the loop and the overall size of the function is large, then this split allows optimization to happen in more fine-grained chunks and with fewer ("soft") deopts.

And to be perfectly clear: I only recommend doing this if you are seeing a particular problem (e.g. --trace-opt telling you that your biggest function is optimized two or more times, taking a second each time). Please don't walk away from reading this answer thinking "everyone should always split their functions"; that's not at all what I'm saying. In extreme cases of huge functions, splitting them can be beneficial.
