Thank you for the solution. The question states "the longest word in words that can be built one character at a time by other words", but it does not explicitly specify that each previous word must be a prefix. This made me assume ["d","ld","rld","orld","world"] would be a valid sequence.
I think the question should be stated more precisely, since I misread it as allowing a character to be added at any position of the previous string!
The description of this problem is not clear; I had to guess from the example that the previous word should be a prefix.
import collections
from functools import reduce

Trie = lambda: collections.defaultdict(Trie)
trie = Trie()
for i, word in enumerate(words):
    reduce(dict.__getitem__, word, trie)[True] = i
One doesn't need a trie to achieve the same time and space complexities as in the second solution: sort on word length and then lexicographically, and add each word to a hash set only if the word without its last letter is already in the set. Update the answer every time a longer word can be added to the set; because of the lexicographical tie-break in the sort, the final answer will also be the lexicographically smallest one.
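The sort-plus-hashset idea above can be sketched as follows (function and variable names are my own, not from the article):

```python
def longest_buildable_word(words):
    # Sort by length, then lexicographically, so every prefix of a word
    # is seen before the word itself, and ties resolve to the smaller word.
    words.sort(key=lambda w: (len(w), w))
    built = {""}  # the empty string "builds" every one-letter word
    best = ""
    for word in words:
        if word[:-1] in built:
            built.add(word)
            if len(word) > len(best):
                best = word
    return best
```

Because a strictly longer word is required to replace the current best, the first word of any given length wins its ties, which is exactly the lexicographically smallest candidate.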
@snailandmail Sort? That's n log n comparisons, each costing up to the word length.
With a trie, it's O(number of characters in all words). It's much faster.
The Java solution for Approach #2 doesn't work. There is an issue in the 'if' conditions in the dfs method. The approach is nice, though.
@soboltv Can you explain further? All solutions in every article have always been checked to get AC with the LeetCode test cases. Is there some test case that fails?
Can someone explain what the Python solution does at the beginning?
This part -
Trie = lambda: collections.defaultdict(Trie)
trie = Trie()
END = True
for i, word in enumerate(words): reduce(dict.__getitem__, word, trie)[END] = i
It's for building the Trie. The lambda is equivalent to this:
def Trie(): return collections.defaultdict(Trie)
So Trie() returns a defaultdict which, whenever an absent key is accessed, creates a fresh Trie for that key.
reduce walks the trie one character at a time by calling the defaultdict's accessor for each character.
reduce returns the leaf node of the trie, and the code sets that node's
True key to the word's index for later use.
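The reduce call can be unrolled into a plain loop; this sketch (with a sample words list of my own) is equivalent:

```python
import collections

Trie = lambda: collections.defaultdict(Trie)
trie = Trie()
words = ["w", "wo"]  # sample input, not from the article

for i, word in enumerate(words):
    node = trie
    for ch in word:
        node = node[ch]  # defaultdict auto-creates missing child nodes
    node[True] = i       # mark the word's end with its index
```

After this runs, trie["w"][True] is 0 and trie["w"]["o"][True] is 1, exactly what reduce(dict.__getitem__, word, trie)[True] = i produces.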
@derek3 You are right, although the complexity will be O(n log n * M), where n is the number of words and M is the maximum word length, while the proposed complexity is O(n * M).
@snailandmail I wasn't accurate, but I think for this question M should be the average word length rather than the longest word length.
@derek3 Yes, you are right
Store each word's index in the node that ends it; then the subsequent DFS is easier to follow. I did not add interning, but with interning the space overhead is negligible in Java. Even without explicit interning, if the OJ provides all test cases as compile-time string literals, all the strings are automatically interned anyway.
Further explanation of the DFS that @awice used in the Java solution: it is a reversed pre-order DFS, in node-right-left order. During the traversal, note that if
node.end <= 0 (ignoring the special case of the
root), the DFS does not go any further down that path: if the chain of prefixes breaks in the middle, nothing further down can be the final answer, because at least one of its prefixes is missing.
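A Python sketch of the same pruned traversal, using an explicit stack instead of recursion (variable names are my own, not @awice's):

```python
import collections
from functools import reduce

def longest_word(words):
    # Build the trie exactly as in the Python solution quoted above.
    Trie = lambda: collections.defaultdict(Trie)
    trie = Trie()
    END = True
    for i, word in enumerate(words):
        reduce(dict.__getitem__, word, trie)[END] = i

    ans = ""
    # Start from the root's children; descend only through nodes that
    # end a word -- a broken prefix chain prunes the whole subtree.
    stack = list(trie.values())
    while stack:
        node = stack.pop()
        if END in node:
            word = words[node[END]]
            if len(word) > len(ans) or (len(word) == len(ans) and word < ans):
                ans = word
            stack.extend(child for key, child in node.items() if key is not END)
    return ans
```

The prune is the `if END in node` check: a child whose node does not end a word is never pushed, which is the Python analogue of stopping when node.end <= 0 in the Java version.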
words.sort(key = lambda c: (-len(c), c))
Can someone explain how this works? What is c? Does it sort by longest word length first, then alphabetically within each length?
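For reference, the key function receives each word as c and returns a tuple: the negated length (so longer words sort first) and the word itself (alphabetical tie-break). A quick demo with a sample list of my own:

```python
words = ["apple", "ap", "apply", "a"]  # sample input, not from the article
words.sort(key=lambda c: (-len(c), c))
# Longest words come first; equal-length words are ordered alphabetically.
print(words)  # ['apple', 'apply', 'ap', 'a']
```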
@awice Hi, can you explain a bit more about why the time complexity of Approach #1 is O(sigma(Wi^2))? From my perspective, since we already have a hash set that contains all words, a lookup is O(1). For each word, we only need O(Wi) to check whether all of its prefixes are in the hash set, so the overall time complexity would be O(sigma(Wi)). Sorry if my question is dumb...