FlorenceMachine
@FlorenceMachine
Posts made by FlorenceMachine

RE: Easy DFS
I'm trying to port this to Python. For some reason it isn't working. Can anyone spot where I messed up in the Java-to-Python conversion?
class Solution(object):
    def crackSafe(self, n, k):
        """
        :type n: int
        :type k: int
        :rtype: str
        """
        sb = ''
        total = int(k**n)
        for i in range(n):
            sb += '0'
        visited = set()
        visited.add(sb)
        self.dfs(sb, total, visited, n, k)
        return sb

    def dfs(self, sb, goal, visited, n, k):
        if len(visited) == goal:
            return True
        prev = sb[n+1:]
        for i in range(k):
            nnext = prev + str(i)
            if nnext not in visited:
                visited.add(nnext)
                sb += str(i)
                if self.dfs(sb, goal, visited, n, k):
                    return True
                else:
                    visited.remove(nnext)
                    sb = sb[:1]
        return False
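For comparison, here's a sketch of a working port (my own rewrite, not the original Java). Two things in the code above look suspect to me: Python strings are immutable, so reassigning `sb` inside `dfs` never propagates back to the caller the way a shared `StringBuilder` would, and the slices `sb[n+1:]` / `sb[:1]` don't match the Java `substring(length - n + 1)` / `deleteCharAt(length - 1)` idioms. The sketch below sidesteps both by tracking the visited windows in a list:

```python
def crack_safe(n, k):
    # Build a De Bruijn sequence: every length-n string over digits 0..k-1
    # appears exactly once as a substring.
    total = k ** n
    start = "0" * n
    visited = {start}
    path = [start]  # windows seen so far, in order

    def dfs():
        if len(visited) == total:
            return True
        # Last n-1 characters of the current window (empty when n == 1).
        prev = path[-1][-(n - 1):] if n > 1 else ""
        for digit in range(k):
            nxt = prev + str(digit)
            if nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                if dfs():
                    return True
                visited.remove(nxt)  # backtrack
                path.pop()
        return False

    dfs()
    # The answer is the first window plus the last character of each later window.
    return start + "".join(w[-1] for w in path[1:])
```

For example, `crack_safe(2, 2)` produces `"00110"`, which contains each of `00`, `01`, `11`, `10` exactly once.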

Python, O(nm^2) time, O(m^2) space, or O(n^3)/O(n^2) with optimization
Explanation/solution below, but I have a BIG question about the asymptotic analysis on this. Since m is getting squared in our time/space complexity, we can guarantee that we make m the smaller of the width or height. If the width is smaller, we can do top-down as below. If the height is smaller, we could instead apply the same solution, but from left to right. In extreme examples, this makes a huge difference. For example, a 100x2 grid would take 2 * 100^2 (or 20,000) iterations when done top-down, but would only take 100 * 2^2 (or 400) iterations when done left-to-right.
If we add a check to the solution below to always approach this the right way (top-down for narrow grids, left-to-right for wide grids), would it be an improvement to the asymptotic analysis? The new time/space complexity with that optimization would be...
Time: O(n^3), Space: O(n^2), where n is either the width or height of the grid, whichever is smaller.
Is this an asymptotically meaningful improvement?
Keep a record of the number of "top-left" and "top-right" pairs that have been seen already... then when we find another pair that matches on a lower row, that new pair completes a rectangle for every matching pair we've seen, so add that to the count.
class Solution(object):
    def countCornerRectangles(self, grid):
        """
        :type grid: List[List[int]]
        :rtype: int
        """
        n, m = len(grid), len(grid[0])
        count = 0
        if n < 2 or m < 2:
            return count
        seen = {}
        for row in range(n):
            for col in range(m - 1):
                if not grid[row][col]:
                    continue
                for i in range(col + 1, m):
                    if grid[row][i]:
                        if (col, i) not in seen:
                            seen[(col, i)] = 0
                        count += seen[(col, i)]
                        seen[(col, i)] += 1
        return count
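On the orientation question, a sketch of what the check could look like (function name is mine, not from any linked solution). Corner rectangles are preserved under transposition, so we can always transpose first to make the paired dimension (the columns) the smaller one, then run the same pair-counting pass:

```python
def count_corner_rectangles_min_dim(grid):
    # Transpose so columns (the squared dimension) are the smaller side;
    # a rectangle's four corners stay corners under transposition.
    if grid and len(grid[0]) > len(grid):
        grid = [list(row) for row in zip(*grid)]
    n = len(grid)
    m = len(grid[0]) if grid else 0
    seen = {}
    count = 0
    for row in range(n):
        cols = [c for c in range(m) if grid[row][c]]
        for a in range(len(cols)):
            for b in range(a + 1, len(cols)):
                pair = (cols[a], cols[b])
                # Every earlier row that had this same column pair
                # completes one more rectangle with the current row.
                count += seen.get(pair, 0)
                seen[pair] = seen.get(pair, 0) + 1
    return count
```

With this in place the pass costs O(n * m^2) with m guaranteed to be min(width, height), which is the bound asked about above.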

Why does this Python O(n) solution execute so much slower than the others?
I don't want to use the built-in sort() or most_common() because I consider that cheating. My solution is O(n) as far as I can tell, and so are the others, but mine executes way slower than the rest (it beats like 0.8%).
Are the built-in sort() and most_common() really the only explanation, or am I missing something here?
class Solution(object):
    def frequencySort(self, s):
        """
        :type s: str
        :rtype: str
        """
        frequencies = [[] for _ in range(len(s))]
        counts = {}
        for c in s:
            if c not in counts:
                counts[c] = 0
            counts[c] += 1
        for c in counts:
            frequencies[counts[c] - 1].append(c)
        result = []
        for i in range(len(frequencies) - 1, -1, -1):
            for c in frequencies[i]:
                result += [c] * (i + 1)
        return ''.join(result)
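One way to investigate is to benchmark the bucket approach directly against a built-in-backed baseline (a rough sketch; the function names are mine, and I'm not claiming which one wins, that's the point of measuring):

```python
import timeit
from collections import Counter

def frequency_sort_buckets(s):
    # The bucket approach from the post: bucket index = count - 1,
    # then read buckets from highest count down.
    buckets = [[] for _ in range(len(s))]
    counts = {}
    for c in s:
        counts[c] = counts.get(c, 0) + 1
    for c, cnt in counts.items():
        buckets[cnt - 1].append(c)
    parts = []
    for i in range(len(buckets) - 1, -1, -1):
        for c in buckets[i]:
            parts.append(c * (i + 1))
    return "".join(parts)

def frequency_sort_builtin(s):
    # Baseline that leans on Counter.most_common (a built-in sort underneath).
    return "".join(c * n for c, n in Counter(s).most_common())

if __name__ == "__main__":
    s = "abracadabra" * 2000
    for fn in (frequency_sort_buckets, frequency_sort_builtin):
        t = timeit.timeit(lambda: fn(s), number=50)
        print(fn.__name__, "%.4fs" % t)
```

Note that sorting distinct characters is effectively O(1) work when the alphabet is bounded (at most a few hundred distinct characters), so a "cheating" sort-based solution is O(n) in practice too.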

If LRU Cache is "Hard", so is this
This question is roughly on the same level as LRU Cache in terms of difficulty, so why is this one "Easy" and LRU Cache "Hard"?
It's easier to get a solution accepted by writing suboptimal O(n) solutions I guess, but just because the wrong answer is easy doesn't mean the question is easy. This question is very complicated with a lot of different possibilities and time/space tradeoffs.
Harder than LRU Cache imo.

RE: C++ O(logN) for write ops, O(1) for reads
Can someone help translate this to Python? I'm trying but I can't figure out a Python equivalent of
map<int, vector<list<int>::iterator>> mp;
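There is no direct Python equivalent of a `list<int>::iterator`, but one common stand-in, assuming you maintain your own doubly linked list, is to store references to node objects: a node reference stays valid while other nodes are inserted or removed, just like the iterator does. A sketch (class and variable names are mine; for the ordered keys of the outer `map<int, ...>` you'd want `sortedcontainers.SortedDict`, a plain dict is used here to stay in the standard library):

```python
class Node:
    # A reference to a Node plays the role of a list<int>::iterator:
    # it stays valid while other nodes are inserted or removed around it.
    __slots__ = ("val", "prev", "next")

    def __init__(self, val, prev=None, nxt=None):
        self.val, self.prev, self.next = val, prev, nxt

class LinkedList:
    def __init__(self):
        self.head = self.tail = None

    def append(self, val):
        node = Node(val, prev=self.tail)
        if self.tail:
            self.tail.next = node
        else:
            self.head = node
        self.tail = node
        return node  # hand back the "iterator"

    def remove(self, node):
        # O(1) erase given a node reference, like list::erase(iterator).
        if node.prev:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self.tail = node.prev

# map<int, vector<list<int>::iterator>> mp;  roughly becomes:
lst = LinkedList()
mp = {}  # int -> list of Node references
for key, val in [(1, 10), (2, 20), (1, 30)]:
    mp.setdefault(key, []).append(lst.append(val))
```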

RE: Short Python / Ruby / C++
What would the space complexity be for this one?
If you modify it to execute max() only once per result of merge(), I think the space complexity can come down to O(max(m, n))? Or is it O(m+n)? O(max(m, n, k))?
RE: 7 lines Python, ~14 lines Java
So the time complexity is obviously O(mn), but what is the space complexity?
It looks like with recursive dfs calls on the stack we could express space complexity as O(max(m, n))? Is that right? Or I guess since it's probably not valid to say O(max(..)), does that mean the nearest approximation with non-dominant factors removed would be O(m+n)?
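For what it's worth, O(max(...)) is a perfectly valid class, and for two variables it coincides with the sum:

```latex
\max(m, n) \;\le\; m + n \;\le\; 2\max(m, n)
\quad\Longrightarrow\quad
O(\max(m, n)) = O(m + n)
```

So the two ways of writing it above describe the same asymptotic bound.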
RE: Python 140ms beats 100%, and works for Nsum (N>=2)
@sha256pki O(n^(k-1)) (careful with parentheses) but yes
RE: [Java/C++] Straightforward dfs solution
Worst-case recursion depth will be O(mn), where m is the width of the grid and n is the height, correct?
So space complexity will be O(mn)?
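A quick empirical check (my own sketch, not the linked solution): a plain flood-fill DFS on an all-land grid really does reach stack depth m*n, since the recursion can snake through every cell before any call returns. That's why the recursion-stack space is O(mn) in the worst case:

```python
def island_dfs_depth(grid):
    # Runs a plain flood-fill DFS and reports the maximum recursion
    # depth reached, to illustrate worst-case stack growth.
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    best = [0]

    def dfs(r, c, depth):
        if r < 0 or r >= rows or c < 0 or c >= cols:
            return
        if seen[r][c] or grid[r][c] == 0:
            return
        seen[r][c] = True
        best[0] = max(best[0], depth)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            dfs(r + dr, c + dc, depth + 1)

    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                dfs(r, c, 1)
    return best[0]

grid = [[1] * 20 for _ in range(20)]  # all land
print(island_dfs_depth(grid))  # 400, i.e. every one of the 20*20 cells on the stack at once
```

(On large inputs a real submission would need `sys.setrecursionlimit` raised, or an explicit stack, precisely because of this O(mn) depth.)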