
What is this functional programming optimization called?

Consider the following Haskell code for computing the nth Fibonacci number.

fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

This code is slow. We can optimize it by refactoring to a helper function that computes "iteratively", "storing" in its arguments all the data needed to advance the recurrence, along with a "counter" that tells us how many steps remain.

fastfib :: Int -> Int
fastfib n = helper 1 0 n
  where
    helper _ b 0 = b
    helper a b i = helper (a + b) a (i - 1)

It seems like this optimization could apply more broadly as well. Does it have a name, in the functional programming community or elsewhere?

Yes, it's called the accumulating parameter technique. (Here's one of my answers about it.)

It's closely related to tail recursion modulo cons ("TRMC"), hylomorphisms, folding, etc.

Monoids enable the re-parenthesization

a+(b+(c+...)) == (a+b)+(c+...) == ((a+b)+c)+...
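For instance (my sketch with the `(+)` monoid, not code from the original post): a right-leaning sum can be turned into a left-leaning, accumulator-passing one precisely because `(+)` is associative.

```haskell
-- Naive right-leaning sum: conceptually reduces a+(b+(c+...)).
sumR :: [Int] -> Int
sumR []       = 0
sumR (x : xs) = x + sumR xs

-- Accumulating-parameter version: associativity lets us re-parenthesize
-- to ((a+b)+c)+..., so the partial sum is carried along in `acc`
-- and the function becomes tail-recursive.
sumAcc :: [Int] -> Int
sumAcc = go 0
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + x) xs
```

Both compute the same result; only the association of the additions differs, which is exactly what the monoid laws license.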

which enables the accumulation. TRMC (which came to be explicitly known in the context of Prolog) is the same, just with lists;

[a]++([b]++([c]++...)) == ([a]++[b])++([c]++...) == (([a]++[b])++[c])++...

and corecursion builds lists in a top-down manner, just as TRMC does.
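As an illustration (my sketch, not from the answer): with the list monoid `(++)`, guarded corecursion builds the result top-down in lazy Haskell, which is what TRMC achieves in strict languages; the accumulator version instead uses the left-leaning association, prepending to the accumulator and reversing at the end to stay linear-time.

```haskell
-- Corecursive, top-down: conceptually [a]++([b]++([c]++...)).
-- Laziness produces the head before the recursive call is forced.
fromTo :: Int -> Int -> [Int]
fromTo a b
  | a > b     = []
  | otherwise = a : fromTo (a + 1) b

-- Accumulating-parameter version: left-leaning association.
-- Prepend (O(1)) and reverse once at the end, rather than
-- appending with ++ at each step (which would be quadratic).
fromToAcc :: Int -> Int -> [Int]
fromToAcc a b = reverse (go [] a)
  where
    go acc i
      | i > b     = acc
      | otherwise = go (i : acc) (i + 1)
```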

The answer linked above contains a link to a 1974 technical report by Friedman and Wise, which essentially discusses accumulation in the context of the `+` monoid, as an example.

There are no monoids in the Fibonacci example, but there is an accumulation of "knowledge", so to speak, as we go along from one Fibonacci number to the next.
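To make that accumulated "knowledge" concrete (my sketch, not from the answer): the state being carried along is the pair of the two most recent Fibonacci numbers, and advancing it once per counter tick can equally be phrased as a strict left fold.

```haskell
import Data.List (foldl')

-- The accumulated state is the pair (fib k, fib (k+1));
-- each step advances it by one index, and the fold drives the counter.
fibFold :: Int -> Int
fibFold n = fst (foldl' step (0, 1) [1 .. n])
  where
    step (a, b) _ = (b, a + b)
```

This is the same accumulating-parameter idea as `fastfib`, with the two parameters packed into one tuple-valued accumulator.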
