11-liner 9ms DFS solution (detailed explanation)


  • 21

    With all the given transactions processed, each person with ID = id ends up with an overall balance bal[id]. The actual id values do not matter, and anyone whose balance happens to be 0 plays no part in the settling count, so we can simply store all non-zero balances in an array debt[], where

    • debt[i] > 0 means a person needs to pay $debt[i] to other person(s);
    • debt[i] < 0 means a person needs to collect $-debt[i] back from other person(s).

    Starting from the first debt debt[0], we look for all other debts debt[i] (i > 0) with the opposite sign to debt[0]. Each such debt[i] can absorb debt[0] in one transaction, debt[i] += debt[0], which clears the person holding debt[0]. That person then drops out of the problem, and we recursively clear people one by one until every debt is settled, tracking the minimum number of transactions across all DFS branches.

    Note: Thanks to @KircheisHe, who found the following great paper about the debt settling problem: Settling Multiple Debts Efficiently: An Invitation to Computing Science by T. Verhoeff, June 2003.

    The question can be transferred to a 3-partition problem, which is NP-Complete.

    class Solution {
    public:
        int minTransfers(vector<vector<int>>& trans) {
            unordered_map<int, long> bal; // each person's overall balance
            for (auto& t : trans) bal[t[0]] -= t[2], bal[t[1]] += t[2];
            for (auto& p : bal) if (p.second) debt.push_back(p.second);
            return dfs(0, 0);
        }

    private:
        // min number of transactions to settle, starting from debt[s]
        int dfs(int s, int cnt) {
            while (s < debt.size() && !debt[s]) ++s; // get next non-zero debt
            int res = INT_MAX;
            for (long i = s + 1, prev = 0; i < debt.size(); ++i)
                if (debt[i] != prev && debt[i] * debt[s] < 0) // skip already-tested or same-sign debts
                    debt[i] += debt[s], res = min(res, dfs(s + 1, cnt + 1)), prev = debt[i] -= debt[s];
            return res < INT_MAX ? res : cnt;
        }

        vector<long> debt; // all non-zero balances
    };
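
    For reference, a minimal driver sketch (my addition, not part of the original post), using the problem's first sample input [[0,1,10],[2,0,5]] (person 0 gives $10 to person 1, person 2 gives $5 to person 0): the balances come out to {0: -5, 1: +10, 2: -5}, and two transfers settle everything.

        #include <algorithm>
        #include <climits>
        #include <iostream>
        #include <unordered_map>
        #include <vector>
        using namespace std;

        // ... Solution class from above ...

        int main() {
            vector<vector<int>> trans = {{0, 1, 10}, {2, 0, 5}};
            cout << Solution().minTransfers(trans) << endl; // prints 2: person 1 pays $5 each to persons 0 and 2
        }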

  • 0

    I like your solution!

    One question: why do you use prev? Is it just to record the value you tried in the previous step? Then why not use a hash table to record all values that have been tried?


  • 4

    @angrybirdR said in 11-liner 9ms DFS solution (detailed explanation):

    I like your solution!

    One question: why do you use prev? Is it just to record the value you tried in the previous step? Then why not use a hash table to record all values that have been tried?

    Thanks for your comment. Actually, I thought about using unordered_set<long> prev to save all previously tested distinct debt[i]:

        int res = INT_MAX; unordered_set<long> prev;
        for (long i = s+1; i < debt.size(); ++i)
          if (!prev.count(debt[i]) && debt[i]*debt[s] < 0)
            debt[i] += debt[s], res = min(res, dfs(s+1, cnt+1)), prev.insert(debt[i]-=debt[s]);
    

    However, this change increased the total test running time from 9ms (65.66%) to 179ms (11.78%). I guess the overhead of reading from and writing to the hash table outweighed what the duplicate handling saved on the OJ's test cases; it is really test-data dependent. In the generic case, I think the hash table would be effective (e.g., debt[] = [-10,6,7,-13,6,7,-16,6,7], which contains multiple duplicates).


  • 0

    @zzg_zzm cool~~


  • 6

    Java Version

    public int minTransfers(int[][] transactions) {
      Map<Integer, Long> map = new HashMap<>();
      for (int[] t : transactions) {
        long val1 = map.getOrDefault(t[0], 0L);
        long val2 = map.getOrDefault(t[1], 0L);
        map.put(t[0], val1 - t[2]);
        map.put(t[1], val2 + t[2]);
      }

      List<Long> list = new ArrayList<>();
      for (long val : map.values()) {
        if (val != 0) list.add(val);
      }

      Long[] debts = new Long[list.size()];
      debts = list.toArray(debts);
      return helper(debts, 0, 0);
    }

    int helper(Long[] debts, int pos, int count) {
      // skip balances already cleared to zero by earlier merges
      while (pos < debts.length && debts[pos] == 0) pos++;
      int res = Integer.MAX_VALUE;
      long pre = 0;
      for (int i = pos + 1; i < debts.length; i++) {
        if (debts[i] != pre && debts[pos] * debts[i] < 0) {
          debts[i] += debts[pos];            // merge debts[pos] into debts[i]
          res = Math.min(res, helper(debts, pos + 1, count + 1));
          debts[i] = debts[i] - debts[pos];  // backtrack
          pre = debts[i];
        }
      }
      return res == Integer.MAX_VALUE ? count : res;
    }

  • 1

    @zzg_zzm Have you tried sorting the balances at the start to make the prev check more effective?


  • 0

    @zzg_zzm said in 11-liner 9ms DFS solution (detailed explanation):

    for (long i = s+1, prev = 0; i < debt.size(); ++i)

    I think that if you use "long prev" instead of "unordered_set<long> prev", you only avoid re-trying a duplicate value when it is the very next candidate tried (i.e., the next i with debt[i] * debt[s] < 0) after its twin; if a different value is tried in between, the duplicate gets re-tried (e.g., with debt = [1, -2, -3, -2], the second -2 is re-tried because prev has been overwritten by -3). Using "unordered_set<long> prev" would fix this scenario.


  • 0

    @LeonCheng Actually, I thought about using an unordered_set to save all previously tested values. However, for some reason, this change took longer to run all of the OJ's tests. Please refer to the 3rd post in this thread.


  • 5

    There is no need to have the pre variable:

    public class Solution {
        public int minTransfers(int[][] transactions) {
            Map<Integer, Long> map = new HashMap<>();
            for (int[] t : transactions) {
                long val1 = map.getOrDefault(t[0], 0L);
                long val2 = map.getOrDefault(t[1], 0L);
                map.put(t[0], val1 - t[2]);
                map.put(t[1], val2 + t[2]);
            }

            List<Long> list = new ArrayList<>();
            for (long val : map.values()) {
                if (val != 0) list.add(val);
            }

            Long[] debts = new Long[list.size()];
            debts = list.toArray(debts);
            return helper(debts, 0, 0);
        }

        int helper(Long[] debts, int pos, int count) {
            while (pos < debts.length && debts[pos] == 0) pos++;
            if (pos >= debts.length) {
                return count;
            }
            int res = Integer.MAX_VALUE;
            for (int i = pos + 1; i < debts.length; i++) {
                if (debts[pos] * debts[i] < 0) {
                    debts[i] += debts[pos];
                    res = Math.min(res, helper(debts, pos + 1, count + 1));
                    debts[i] = debts[i] - debts[pos];
                }
            }
            return res;
        }
    }

  • 0

    Can you talk more about "question can be transferred to a 3-partition problem"? How can a 3-partition problem be reduced to the current problem?


  • 1

    @0x0101 said in 11-liner 9ms DFS solution (detailed explanation):

    Can you talk more about "question can be transferred to a 3-partition problem"? How can a 3-partition problem be reduced to the current problem?

    In the referenced paper, Settling Multiple Debts Efficiently: An Invitation to Computing Science by T. Verhoeff, June 2003, Section 8.3 "Minimum number of transfers" explains the relationship between the debt settling problem and the 3-partition problem.
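
    For what it's worth, here is a sketch of the standard reduction (my own summary, not a quote from the paper). Settling n non-zero balances with n - k transfers is possible exactly when the balances split into k disjoint zero-sum groups, since a group of size m settles internally with m - 1 transfers. Now take a 3-partition instance: integers a_1, ..., a_{3m} with a_1 + ... + a_{3m} = m*T and T/4 < a_i < T/2, and build the 4m balances

        debt[] = [a_1, ..., a_{3m}, -T, -T, ..., -T]   // m copies of -T

    Every zero-sum group needs at least one -T, and the bounds on a_i force each group to contain exactly one -T plus exactly three a_i summing to T. So the debts settle in 4m - m = 3m transfers if and only if the 3-partition instance is solvable, which makes the settling problem NP-hard.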


  • 0

    @hot13399
    Why do you need
    while(pos < debts.length && debts[pos] == 0) pos++;
    When iterating over the hash map values, only non-zero values are added, so why do you need to exclude zero values again?


  • 0

    @Tōsaka-Rin debt[i] += debt[s] can create new zeros in the array (two balances that cancel exactly, e.g., 5 and -5), so we need to skip those zeros.


  • 0

    Why convert the list to an array? Can we use the list directly in the helper method?


  • 0

    @huangw3 I see what you meant.

    Abandoning unordered_map<int, long> bal in favor of vector<long> debt, which holds only the non-zero entries from the map, helps avoid visiting zero entries in the first place, since people who carry no debt are completely unrelated to the problem. Plus, the exact identity values t[0] do not really matter to this problem, so I think using a lightweight container (a vector rather than an unordered_map) helps performance as well.


  • 0

    Do you think the time complexity of this algorithm is O(N!)? The paper explicitly says "The algorithmic complexity is not investigated." :(


  • 1

    @aps105 Yes, I believe the worst case has time complexity O(N!).

    Clearly, for N people, at most N-1 transactions are guaranteed to clear all debts. So the question is which clearing order gives the optimal answer, and the DFS basically exhausts all possible orders. Ordering N people with the last two interchangeable gives N!/2 possibilities, so the algorithm is O(N!). The worst case should be one where no proper subset has an overall net-zero debt.
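
    The same bound can be read off a rough recurrence (my own sketch): the first non-zero debt has up to N-1 opposite-sign partners to try, and each branch recurses on the N-1 remaining debts:

        T(N) <= (N-1) * T(N-1) + O(N)  =>  T(N) = O(N!)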


  • 0

    You can sort the debts up front to make the skipping more efficient; the prev variable only skips duplicates that are tried back-to-back. (A sketch follows below.)
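
    A minimal sketch of that tweak (my addition), applied to the C++ solution from the original post: sorting debt once up front makes equal balances adjacent, so at the top level the prev check catches every duplicate. Deeper recursion levels mutate the array, so sortedness is only guaranteed initially.

        int minTransfers(vector<vector<int>>& trans) {
            unordered_map<int, long> bal;
            for (auto& t : trans) bal[t[0]] -= t[2], bal[t[1]] += t[2];
            for (auto& p : bal) if (p.second) debt.push_back(p.second);
            sort(debt.begin(), debt.end()); // needs <algorithm>; equal balances become adjacent
            return dfs(0, 0);
        }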

