
Minimum Coin Change Leetcode problem (Dynamic Programming)

Here's the question: https://leetcode.com/problems/coin-change/

I'm having some trouble understanding two different dynamic-programming methods used to solve this problem. I'm currently going through the Grokking Dynamic Programming course from educative.io, and their approach is to search each combination as a subset of the coins. They test whether a coin is viable; if it is, they try it in the DFS. If not, they skip that coin, move to the next index, and try the next coin.

Here's Grokking's approach with memoization:

    from math import inf
    from typing import List

    class Solution:
        def coinChange(self, coins: List[int], amount: int) -> int:
            def dfs(i, total, memo):
                key = (i, total)
                if key in memo:
                    return memo[key]
                if total == 0:
                    return 0
                if i >= len(coins):  # ran out of coins to consider
                    return inf

                count = inf
                if coins[i] <= total:
                    # Option 1: take coins[i] (staying at index i allows taking it again)
                    res = dfs(i, total - coins[i], memo)
                    if res != inf:
                        count = res + 1

                # Option 2: skip coins[i] and move on to the next index
                memo[key] = min(count, dfs(i + 1, total, memo))
                return memo[key]

            # Compute the result once and reuse it; calling dfs twice with two
            # fresh memo dictionaries would do all the work twice.
            res = dfs(0, amount, {})
            return res if res != inf else -1

It doesn't do very well on LeetCode; it runs very slowly (but it passes nonetheless). The efficient algorithm from the discussion section was this:

    from functools import lru_cache
    from math import inf
    from typing import List

    class Solution:
        def coinChange(self, coins: List[int], amount: int) -> int:
            @lru_cache(None)
            def dp(rem):  # rem = remaining amount (avoids shadowing the builtin sum)
                if rem == 0:
                    return 0
                if rem < 0:
                    return inf

                count = inf
                for coin in coins:
                    # every coin is considered at every remaining amount
                    count = min(count, dp(rem - coin))

                return count + 1

            # dp is cached, so the second call here costs only a lookup
            return dp(amount) if dp(amount) != inf else -1

Does this second code have the same logic as "testing the subsets of coins"? What's the difference between the two? Is the for-loop a way of testing the different subsets, like with backtracking?

I tested the second algorithm with memoization in a dictionary, like the first, using the remaining amount as the key, and its efficiency tanked. But then I tried @lru_cache with the first algorithm, and it didn't help either.

Could anyone explain why the second algorithm is so much faster? Is it my memoization that sucks?

Does this second code have the same logic as "testing the subsets of coins"?

If by "subset" you mean the subset of the coins that is still available for selection, then no. The second algorithm does not reduce the problem in terms of coins; it reasons that at any time any coin can be selected, irrespective of previous selections. Although this may seem inefficient, since it tries the same combination in every possible ordering, that downside is minimised by memoization.
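For example, with coins = [1, 2] and amount = 3, the combination 1 + 2 is reached both as 1-then-2 and as 2-then-1, but the subproblems dp(2) and dp(1) are each computed only once, so the duplicated orderings cost no more than a cache lookup.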

What's the difference between the two?

  • The first one takes coins in the order they are given, never going back to an earlier coin once it has decided to move on to the next one. In doing so, it reduces the problem in terms of the available coins. The second one doesn't care about order and considers every permutation; it only reduces the problem in terms of the remaining amount.
  • The first one has a larger memoization collection, because the coin index is part of the key, whereas the second one's memoization is keyed by the amount alone (quantified just after this list).
  • The first one makes a recursive call even when no coin is selected (the call at the end of the inner function), since that fits the logic of reducing the problem to fewer coins. The second one only makes a recursive call when the amount is actually reduced.
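To quantify the second point: the first memo can hold up to len(coins) · (amount + 1) distinct (i, total) keys, while the second holds at most amount + 1 non-negative keys (plus a few cached negative amounts), so the first algorithm can end up computing roughly len(coins) times as many states.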

Is the for-loop a way of testing the different subsets, like with backtracking?

If by "subset" you mean that the problem is reduced to fewer coins, then no: the second algorithm doesn't attempt that methodology. The for loop is just a way to consider every coin. It doesn't reduce the problem size in terms of the available coins, only in terms of the remaining amount.
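For example, with coins = [1, 2, 5] the loop evaluates dp(11) = 1 + min(dp(10), dp(9), dp(6)): every coin is reconsidered at every remaining amount, regardless of which coins were taken to get there.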

Could anyone explain why the second algorithm is so much faster?

It is faster because the memoization key space is smaller, leading to more cache hits and therefore fewer recursive calls. You can experiment with this: add global counters that count the number of executions of both inner functions (dfs and dp) and you'll see a dramatic difference there.
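Here is a rough sketch of that experiment (the harness and names are my own, not from either solution). Both counters count only states that are actually computed, since the dfs counter is incremented only after a memo miss and lru_cache short-circuits repeated dp calls before the body runs:

    from functools import lru_cache
    from math import inf
    from typing import Dict, List

    def compare_call_counts(coins: List[int], amount: int) -> Dict[str, int]:
        calls = {"dfs": 0, "dp": 0}

        def dfs(i, total, memo):
            key = (i, total)
            if key in memo:
                return memo[key]
            calls["dfs"] += 1  # one increment per distinct (i, total) state
            if total == 0:
                memo[key] = 0  # memoize base cases so repeats are not recounted
                return 0
            if i >= len(coins):
                memo[key] = inf
                return inf
            count = inf
            if coins[i] <= total:
                res = dfs(i, total - coins[i], memo)
                if res != inf:
                    count = res + 1
            memo[key] = min(count, dfs(i + 1, total, memo))
            return memo[key]

        @lru_cache(None)
        def dp(rem):
            calls["dp"] += 1  # lru_cache filters repeated amounts before this runs
            if rem == 0:
                return 0
            if rem < 0:
                return inf
            return min(dp(rem - coin) for coin in coins) + 1

        dfs(0, amount, {})
        dp(amount)
        return calls

    # Keep the amount modest: both functions recurse once per unit of amount
    # in the worst case, and Python's default recursion limit is 1000.
    print(compare_call_counts([1, 2, 5], 500))

With coins = [1, 2, 5] and amount = 500, the dfs count comes out a few times larger than the dp count, roughly the ratio of the two key spaces.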

Is it my memoization that sucks?

You could say that, but it would be too harsh: the memoization works, it just keys on a much larger state space.
