# Super Short and Easy Java DFS Solution :)

• The idea is to allocate each number into one of k buckets, starting from the largest number to reduce the number of DFS calls.
The target for each bucket is the sum of each group, i.e. sum / k.

```java
public boolean canPartitionKSubsets(int[] nums, int k) {
    if (nums == null || nums.length == 0) return false;
    int sum = 0;
    for (int num : nums) sum += num;
    if (sum % k != 0) return false;       // sum must split evenly into k groups
    Arrays.sort(nums);                    // ascending; we traverse from the right (largest first)
    int[] bucket = new int[k];
    return dfs(nums, bucket, nums.length - 1, sum / k);
}

public boolean dfs(int[] nums, int[] bucket, int index, int target) {
    if (index == -1) return true;         // every number has been placed
    for (int i = 0; i < bucket.length; i++) {
        if (bucket[i] + nums[index] <= target) {
            bucket[i] += nums[index];     // try nums[index] in bucket i
            if (dfs(nums, bucket, index - 1, target)) return true;
            bucket[i] -= nums[index];     // backtrack
        }
    }
    return false;
}
```
(Uses `java.util.Arrays` for the sort.)
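Here is a minimal, self-contained sketch of how the solution above can be exercised, wrapped in a hypothetical `PartitionDemo` class with a `main` method (the class and test inputs are my additions, not part of the original post; the first input is LeetCode's own example for this problem):

```java
import java.util.Arrays;

public class PartitionDemo {

    public static boolean canPartitionKSubsets(int[] nums, int k) {
        if (nums == null || nums.length == 0) return false;
        int sum = 0;
        for (int num : nums) sum += num;
        if (sum % k != 0) return false;   // sum must split evenly into k groups
        Arrays.sort(nums);                // ascending; DFS walks from the right
        int[] bucket = new int[k];
        return dfs(nums, bucket, nums.length - 1, sum / k);
    }

    static boolean dfs(int[] nums, int[] bucket, int index, int target) {
        if (index == -1) return true;     // every number has been placed
        for (int i = 0; i < bucket.length; i++) {
            if (bucket[i] + nums[index] <= target) {
                bucket[i] += nums[index];
                if (dfs(nums, bucket, index - 1, target)) return true;
                bucket[i] -= nums[index]; // backtrack
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // {5}, {1,4}, {2,3}, {2,3} each sum to 20 / 4 = 5
        System.out.println(canPartitionKSubsets(new int[]{4, 3, 2, 3, 5, 2, 1}, 4)); // true
        // sum = 10 is not divisible by 3
        System.out.println(canPartitionKSubsets(new int[]{1, 2, 3, 4}, 3)); // false
    }
}
```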

• @GardenAAA Could you explain why you use `sort` before the DFS?

• @Aoi---silent My thinking was that if we put the large numbers into buckets first, the small numbers will have fewer options later, which reduces the number of DFS calls. For example, take 1,1,1,1,1,4,4,4,4,4.
If we start from the left, the first 1 will try 1, 11, 111, 1111, 11111; the second 1 will try 1, 11, 111, 1111, and so on. If we start from the right instead, there are far fewer DFS calls.
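The pruning effect described above can be checked empirically. The sketch below (my own instrumentation, not from the thread) adds a hypothetical call counter and a `step` parameter so the same DFS can place numbers either largest-first (right to left) or smallest-first (left to right), then compares the call counts on the 1,1,1,1,1,4,4,4,4,4 example with k = 5:

```java
import java.util.Arrays;

public class SortOrderDemo {
    static long calls; // counts DFS invocations, to compare search effort

    // step = -1 walks from the largest element down; step = +1 from the smallest up
    static boolean dfs(int[] nums, int[] bucket, int index, int step, int target) {
        calls++;
        if (index < 0 || index >= nums.length) return true; // all numbers placed
        for (int i = 0; i < bucket.length; i++) {
            if (bucket[i] + nums[index] <= target) {
                bucket[i] += nums[index];
                if (dfs(nums, bucket, index + step, step, target)) return true;
                bucket[i] -= nums[index]; // backtrack
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] nums = {1, 1, 1, 1, 1, 4, 4, 4, 4, 4};
        int k = 5, target = 5;            // sum = 25, 25 / 5 = 5
        Arrays.sort(nums);

        calls = 0;                        // large numbers first (right to left)
        boolean bigOk = dfs(nums, new int[k], nums.length - 1, -1, target);
        long bigFirst = calls;

        calls = 0;                        // small numbers first (left to right)
        boolean smallOk = dfs(nums, new int[k], 0, +1, target);
        long smallFirst = calls;

        System.out.println(bigOk + " " + smallOk);       // both orders find a partition
        System.out.println(bigFirst < smallFirst);       // but largest-first makes fewer calls
    }
}
```

Both orders return `true` (the search is exhaustive either way); the difference is purely in how many recursive calls it takes to get there.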

• @GardenAAA Nice solution! What is the runtime of this algorithm? My guess: each element can be tried in each of the k buckets (k recursive calls per element), so something like O(k^N)?

• @simonzhu91 I also had a similar approach, but without sorting and traversing from the largest number. It really makes a difference: Time Limit Exceeded -> AC.
