import java.util.Arrays;

public class Solution {
    public int nthSuperUglyNumber(int n, int[] primes) {
        int len = primes.length;
        int[] index = new int[len];      // index[j]: next position in ugly[] to multiply primes[j] by
        Arrays.fill(index, 1);
        int[] fac = new int[len];        // fac[j]: next candidate multiple of primes[j]
        for (int i = 0; i < len; i++) fac[i] = primes[i];
        int[] ugly = new int[n];
        ugly[0] = 1;
        for (int i = 1; i < n; i++) {
            int min = findMin(fac);
            ugly[i] = min;
            for (int j = 0; j < len; j++) {
                if (min == fac[j]) {     // advance every pointer that produced the min, so duplicates are skipped
                    fac[j] = primes[j] * ugly[index[j]++];
                }
            }
        }
        return ugly[n - 1];
    }

    public int findMin(int[] a) {
        int min = a[0];
        for (int i = 1; i < a.length; i++)
            min = Math.min(min, a[i]);
        return min;
    }
}
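As a quick sanity check, here is a hypothetical standalone demo of the same algorithm (the class name SuperUglyDemo, the main method, and the printed sequence are my additions; the input {2, 7, 13, 19} with answer 32 for n = 12 is LeetCode 313's sample):

```java
import java.util.Arrays;

// Hypothetical standalone demo of the solution above; the class name
// SuperUglyDemo and the main method are my additions for illustration.
public class SuperUglyDemo {
    static int nthSuperUglyNumber(int n, int[] primes) {
        int len = primes.length;
        int[] index = new int[len];
        Arrays.fill(index, 1);
        int[] fac = primes.clone();       // next candidate multiple for each prime
        int[] ugly = new int[n];
        ugly[0] = 1;
        for (int i = 1; i < n; i++) {
            int min = fac[0];
            for (int j = 1; j < len; j++) min = Math.min(min, fac[j]);
            ugly[i] = min;
            for (int j = 0; j < len; j++) {
                if (min == fac[j]) fac[j] = primes[j] * ugly[index[j]++];
            }
        }
        return ugly[n - 1];
    }

    public static void main(String[] args) {
        // LeetCode 313's sample input: the 12th super ugly number here is 32.
        int[] primes = {2, 7, 13, 19};
        for (int i = 1; i <= 12; i++) {
            System.out.print(nthSuperUglyNumber(i, primes) + (i < 12 ? " " : "\n"));
        }
        // prints: 1 2 4 7 8 13 14 16 19 26 28 32
    }
}
```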
Easy Java Solution. Beats 97%


Continuing my suggestion above so findMin returns an index. @alex153 is right, the duplicates are not handled then; here's an idea to mitigate that: findMin does a best-case linear scan, and then the j loop does a best-case linear scan again. That could be collapsed into one best-case linear scan and one worst-case linear scan at the cost of an extra primes.length-size array (which doesn't increase nominal space complexity). The following ran in 19 ms, 99.39%:

public int nthSuperUglyNumber(int n, int[] primes) {
    int[] index = new int[primes.length];
    int[] fac = primes.clone(); // faster than Arrays.copyOf(primes, primes.length)
    int[] minIndices = new int[primes.length];
    int[] ugly = new int[n];
    Arrays.fill(index, 1);
    ugly[0] = 1;
    for (int i = 1; i < n; i++) {
        int countMinIndices = findMins(fac, minIndices);
        ugly[i] = fac[minIndices[0]];
        for (int j = 0; j < countMinIndices; ++j) {
            int minj = minIndices[j];
            fac[minj] = primes[minj] * ugly[index[minj]++];
        }
    }
    return ugly[n - 1];
}

public int findMins(int[] a, int[] minIndices) {
    int min = Integer.MAX_VALUE;
    int count = 0;
    for (int i = 0; i < a.length; i++) {
        if (a[i] < min) {
            min = a[i];              // smaller min found
            count = 0;               // restart collecting min indices
            minIndices[count++] = i;
        } else if (a[i] == min) {
            minIndices[count++] = i;
        }
    }
    return count;
}

It is still the same algorithm regarding writing and updating the ugly, fac and index arrays, just fewer for-loop iterations to achieve the updates. Note that this is likely a premature optimization, as most of the time LeetCode's tests don't reflect time-complexity differences well enough to rank algorithms based on their ms results.
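To make the duplicate handling concrete, here is a small, hypothetical demo of the one-pass scan that collects every index holding the current minimum (the class name FindMinsDemo and the sample array are my additions):

```java
// Hypothetical demo of the one-pass duplicate-collecting scan described above;
// the class name FindMinsDemo and the sample array are my additions.
public class FindMinsDemo {
    static int findMins(int[] a, int[] minIndices) {
        int min = Integer.MAX_VALUE;
        int count = 0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] < min) {
                min = a[i];              // smaller min found
                count = 0;               // restart collecting min indices
                minIndices[count++] = i;
            } else if (a[i] == min) {
                minIndices[count++] = i; // another pointer at the same min
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[] fac = {14, 14, 13, 19, 13};   // two pointers both sit at the minimum 13
        int[] minIndices = new int[fac.length];
        int count = findMins(fac, minIndices);
        System.out.println(count);                               // prints 2
        System.out.println(minIndices[0] + " " + minIndices[1]); // prints 2 4
    }
}
```

Advancing both of those pointers in the same iteration is exactly what keeps the duplicate value 13 from being emitted twice into ugly.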