@harshaneel said in Simple Python:

@StefanPochmann That's an elegant solution. I am just wondering what would be the time complexity of this?

I am only speaking in terms of C++ since I am not familiar with Python.

Without consideration of the time complexity of the array hasher (I am not sure how Python handles it by default; a C++ coder needs to define their own), the time complexity should be `O(2^N)` in the worst case (when the given array `nums` itself is strictly increasing, so it has no duplicates).

Note that the final result `sub` for `nums[i=1:N]` is constructed from the previous result for `nums[i=1:N-1]`, so the new subsequences added at each step while looping through `for num in nums` are:

- `[nums[1]]`
- `[nums[2]]`, `[nums[1], nums[2]]`
- `[nums[3]]`, `[nums[1], nums[3]]`, `[nums[2], nums[3]]`, `[nums[1], nums[2], nums[3]]`
- ...
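The doubling pattern above can be sketched with a set of tuples. This is my own reconstruction for illustration, not necessarily the solution from the thread, and it omits any problem-specific filter (e.g., an increasing check):

```python
def all_subsequences(nums):
    """Collect every distinct non-empty subsequence of nums as a tuple.

    A set of tuples deduplicates automatically; each pass over an
    element extends every previously found subsequence, matching
    the step-by-step enumeration above.
    """
    subs = {()}  # seed with the empty subsequence
    for num in nums:
        # every existing subsequence spawns a new one ending in num
        subs |= {sub + (num,) for sub in subs}
    subs.discard(())  # drop the empty seed
    return subs

print(len(all_subsequences([1, 2, 3])))  # 7, i.e. 2**3 - 1
```

With `[1, 2, 3]` this produces exactly the seven subsequences enumerated above.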

So there are at most `sub[k-1].size + 1` new subsequences added when processing `nums[k]`, where `sub[k-1].size` is the number of subsequences previously found. Therefore, the overall number of subsequences is `O(2^N)`.
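The counting argument can be checked numerically. This small sketch (my own, not from the thread) verifies that with distinct elements the subsequence set doubles at every step:

```python
# Verify the doubling: with N distinct elements (the worst case above),
# each step adds exactly |subs| new subsequences, so |subs| = 2**k
# after processing k elements (including the empty subsequence).
subs = {()}
for k, num in enumerate([1, 2, 3, 4, 5], start=1):
    new = {sub + (num,) for sub in subs}
    assert new.isdisjoint(subs)   # all of them are genuinely new
    subs |= new
    assert len(subs) == 2 ** k    # total count doubles each step
print(len(subs) - 1)  # 31 non-empty subsequences for N = 5
```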

Since duplicated sequences are not allowed, in my opinion it might not be trivial for a hash set of arrays to compute hash values. Again, I am not sure how Python handles this. But theoretically, the overall time complexity should be `O(2^N * hasher(N))`, where `hasher(N)` is the cost of the hash function on arrays of maximum size `N`.
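For what it's worth, Python handles this out of the box: tuples are hashable (hashing a tuple visits every element, so `hasher(N)` is `O(N)`, giving `O(2^N * N)` overall), while mutable lists are not, which is why set-based Python solutions store subsequences as tuples. A quick check:

```python
# Tuples are hashable and can live in a set; lists are not.
# CPython's tuple hash combines the hash of every element, so
# hashing a length-N tuple costs O(N).
seqs = set()
seqs.add((1, 2, 3))          # fine: tuples are hashable
try:
    seqs.add([1, 2, 3])      # TypeError: unhashable type: 'list'
except TypeError:
    print("lists are unhashable; use tuples in a set")
```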

Therefore, I think the challenging point of this problem is how to handle duplicates of large data items (e.g., arrays), so the choice of programming language can provide some convenience here.