
Will GHC optimize a*a*a*a*a*a to (a*a*a)*(a*a*a)?

This is related to the popular question "Why doesn't GCC optimize a*a*a*a*a*a to (a*a*a)*(a*a*a)?"

For a lazy functional programming language like Haskell, how would the compiler deal with this case?

This isn't a question of strict versus lazy evaluation but a product of the semantics of floating point numbers.

Haskell's Double has more or less the same semantics as C's double, Java's, or most other languages'. Very Smart And Qualified People met and decided what the right way to represent real numbers in binary is (the IEEE 754 standard), and we've more or less stuck to it.

So the answer is No: floating-point rounding depends on the order of evaluation, so reassociating the multiplications could change the result of the computation.
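You can see the order dependence directly with addition, where the effect is easiest to verify: the two groupings round differently and produce different Doubles.

```haskell
-- Floating-point arithmetic is not associative: grouping changes rounding.
-- (Addition is used here because the discrepancy is easy to see; the same
-- reasoning applies to chains of multiplications like a*a*a*a*a*a.)
leftFirst, rightFirst :: Double
leftFirst  = (0.1 + 0.2) + 0.3   -- 0.6000000000000001
rightFirst = 0.1 + (0.2 + 0.3)   -- 0.6

main :: IO ()
main = do
  print leftFirst                 -- 0.6000000000000001
  print rightFirst                -- 0.6
  print (leftFirst == rightFirst) -- False
```

Because the two groupings denote different values, a compiler that honors IEEE 754 semantics is not free to swap one for the other.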

While here we seem to be talking about the primop GHC has for multiplying floating point numbers, remember that * (like +) is extensible in Haskell and just a normal function from the Num type class. And new instances of Num are under no obligation to provide associative operations.

As a simple example, I have a library which lets the user build up C ASTs in a Haskell DSL. To make it convenient to add things, I added a Num instance to my library. The ASTs for (a + b) + c and a + (b + c) are not the same! They "slope" different ways. If GHC started randomly moving my parens about, I would definitely notice and be annoyed.
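A minimal sketch of that kind of Num instance (the `Expr` type and constructor names are illustrative, not the actual library's API) makes the point concrete: (+) builds a tree node, so the two groupings yield structurally different ASTs.

```haskell
-- A toy expression AST with a Num instance whose (+) is not associative:
-- it records the exact shape of the expression the user wrote.
data Expr
  = Lit Integer
  | Add Expr Expr
  deriving (Eq, Show)

instance Num Expr where
  (+)         = Add
  fromInteger = Lit
  -- The remaining methods aren't needed for this sketch.
  (*)         = error "not implemented"
  abs         = error "not implemented"
  signum      = error "not implemented"
  negate      = error "not implemented"

main :: IO ()
main = do
  let a = Lit 1; b = Lit 2; c = Lit 3
  print ((a + b) + c)                 -- Add (Add (Lit 1) (Lit 2)) (Lit 3)
  print (a + (b + c))                 -- Add (Lit 1) (Add (Lit 2) (Lit 3))
  print ((a + b) + c == a + (b + c))  -- False
```

If GHC reassociated (+) here, it would silently change which tree the user's code builds, which is observable program behavior, not just a numeric detail.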

