
GHC Partial Evaluation and Separate Compilation

Whole-program compilers like MLton create optimized binaries in part due to their ability to use the entire source of the program to perform partial evaluation: aggressively inlining constants and evaluating until stuck, all during compilation!
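As a hedged sketch of what such a partial evaluator does (the function names below are invented for illustration, not taken from MLton), consider applying a recursive function to a statically known argument:

```haskell
module Main where

-- Exponentiation by repeated multiplication; the exponent is the
-- argument a partial evaluator would like to see statically.
power :: Int -> Int -> Int
power 0 _ = 1
power n x = x * power (n - 1) x

-- With the whole program in hand, a partial evaluator can reduce
-- `power 3` at compile time, leaving the residual program
-- \x -> x * (x * (x * 1)). GHC's inliner, by contrast, generally
-- will not unroll a recursive definition like this on its own.
cube :: Int -> Int
cube = power 3

main :: IO ()
main = print (cube 4)  -- prints 64
```

The point is that `cube` is "stuck" only on its runtime argument `x`; everything involving the constant `3` can, in principle, be evaluated away before the program ever runs.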

This has been explored publicly a bit in the Haskell space by Gabriel Gonzalez's Morte.

Now my understanding is that Haskell does not do very much of this, if any at all. The reason I've seen cited is that it is antithetical to separate compilation. That makes sense as a reason to prohibit partial evaluation across source-file boundaries, but it seems like in-file partial evaluation would still be an option.
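For context on what GHC can do across that boundary: separate compilation in GHC communicates through interface (`.hi`) files, and a definition whose unfolding is recorded there can still be inlined and specialized at call sites in other modules. A minimal sketch (the `dot` function is invented for illustration, and would in practice live in its own library module):

```haskell
module Main where

-- INLINABLE asks GHC to record this definition's full unfolding in the
-- defining module's .hi interface file, so importing modules can still
-- inline and specialize it at their call sites (e.g. at Int) despite
-- separate compilation. Without such a pragma, only definitions GHC
-- judges small enough get their unfoldings exported.
dot :: Num a => [a] -> [a] -> a
dot xs ys = sum (zipWith (*) xs ys)
{-# INLINABLE dot #-}

main :: IO ()
main = print (dot [1, 2, 3] [4, 5, 6 :: Int])  -- prints 32
```

This is weaker than whole-program partial evaluation (the compiler only sees unfoldings that were explicitly or heuristically exported), but it shows the mechanism GHC uses to recover some cross-module optimization.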

As far as I know, though, in-file partial evaluation is still not performed.

My question is: is this true? If so, what are the tradeoffs for performing in-file partial evaluation? If not, what is an example file where one can improve compiled performance by putting more functionality into the same file?

(Edit: To clarify the above, I know there are a lot of questions as to what the best set of reductions to perform is; many are undecidable! I'd like to know the tradeoffs made in an "industrial strength" compiler with separate compilation that live at a level above choosing the right equational theory, if there are any interesting things to talk about there. Things like compilation speed or file bloat are more toward the scope I'm interested in. Another question in the same space might be: "Why can't MLton get separate compilation just by compiling each module separately, leaving the API exposed, and then linking them all together?")

This is definitely an optimization that a small set of people are interested in and are pursuing. The Google search term to find information on it is "supercompilation". I believe there are at least two approaches floating about at the moment.

It seems one of the big tradeoffs is compile-time resources (both time and memory), and at the moment the performance wins from paying those costs appear to be somewhat unpredictable. There's quite some work left. A few links:

