Thanks for sharing, so far the only explanation I think is reasonable.
Thanks
I got TLE when using BFS to search, while DFS gets AC.
I think BFS and DFS have the same time complexity:
Without the wildcard '.':
both take O(wordLen) to determine whether a word exists.
With the wildcard '.':
both have to traverse all the nodes in the trie when searching for a non-existent long word with many '.'s.
For example:
add(abc)
add(ade)
add(afg)
then
search(...z)
Both DFS and BFS have to traverse every node to find out that no matching word exists.
Is my analysis correct? If so, why does BFS get TLE?
public class WordDictionary {
    TrieNode root = new TrieNode();

    // Adds a word into the data structure.
    public void addWord(String word) {
        root.add(word);
    }

    // Returns true if the word is in the data structure. A word may
    // contain the dot character '.' to represent any one letter.
    public boolean search(String word) {
        // TLE for bfs(word)
        return dfs(word, 0, root);
    }

    private boolean dfs(String word, int start, TrieNode node) {
        if (start == word.length()) {
            return node.isEnd;
        }
        char c = word.charAt(start);
        if (c == '.') {
            for (TrieNode child : node.childMap.values()) {
                if (dfs(word, start + 1, child)) {
                    return true;
                }
            }
        } else {
            if (node.childMap.containsKey(c)) {
                TrieNode child = node.childMap.get(c);
                if (dfs(word, start + 1, child)) {
                    return true;
                }
            }
        }
        return false;
    }

    public boolean bfs(String word) {
        int i = 0;
        LinkedList<TrieNode> q = new LinkedList<>();
        q.add(root);
        while (!q.isEmpty() && i < word.length()) {
            char c = word.charAt(i);
            Set<TrieNode> level = new HashSet<>();
            int size = q.size(); // snapshot the size: q shrinks as we remove
            for (int k = 0; k < size; k++) {
                TrieNode node = q.remove();
                if (c == '.') {
                    level.addAll(node.childMap.values());
                } else {
                    if (node.childMap.containsKey(c)) {
                        level.add(node.childMap.get(c));
                    }
                }
            }
            q.addAll(level);
            i++;
        }
        if (i == word.length()) {
            while (!q.isEmpty()) {
                if (q.remove().isEnd) {
                    return true;
                }
            }
        }
        return false;
    }

    class TrieNode {
        char c = ' ';
        Map<Character, TrieNode> childMap = new HashMap<Character, TrieNode>();
        boolean isEnd = false;

        public TrieNode() {
        }

        public TrieNode(char c) {
            this.c = c;
        }

        public void add(String str) {
            TrieNode runner = this;
            for (int i = 0; i < str.length(); i++) {
                char c = str.charAt(i);
                if (!runner.childMap.containsKey(c)) {
                    runner.childMap.put(c, new TrieNode(c));
                }
                runner = runner.childMap.get(c);
            }
            runner.isEnd = true;
        }
    }
}
EDIT:
When searching a word like ..c in the case above, DFS is better, because it can exit immediately once it finds a match, while BFS has to traverse all the layers before reaching that end.
So: searching for a non-existent word? Same. Searching for an existing word? DFS can be faster.
I would say that the coding style is not good.
Why is it N^N? I think in the worst case (when every substring is a valid word) there are O(2^N) solutions instead of N^N.
Let f(n) be the total number of outputs for a string of length n. Since every substring is valid:
f(n) = f(n - 1) + f(n - 2) + ... + f(1)
so
f(n - 1) = f(n - 2) + f(n - 3) + ... + f(1)
=> f(n) - f(n - 1) = f(n - 1)
=> f(n) = 2 * f(n - 1) = ... = O(2^N)
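The doubling in that recurrence can be sanity-checked numerically; a minimal sketch, assuming the base case f(1) = 1 (the helper name is illustrative):

```java
public class SubstringCount {
    // f(n) = f(n-1) + f(n-2) + ... + f(1), with f(1) = 1, computed bottom-up.
    static long f(int n) {
        long[] memo = new long[n + 1];
        memo[1] = 1;
        for (int i = 2; i <= n; i++) {
            long sum = 0;
            for (int j = 1; j < i; j++) sum += memo[j]; // sum of all smaller terms
            memo[i] = sum;
        }
        return memo[n];
    }

    public static void main(String[] args) {
        // From n = 3 on, f(n) doubles each step, i.e. f grows as O(2^N).
        for (int n = 3; n <= 20; n++) {
            if (f(n) != 2 * f(n - 1)) throw new AssertionError("no doubling at n=" + n);
        }
        System.out.println(f(5));
    }
}
```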
Upvote. It's true that the solution by iaison is beautiful, but the space complexity is still O(N), since we create N new nodes.
Nice way of reversing a linked list, though I'm more used to the old way: iteratively reversing the direction of each link from head to tail.
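The "old way" referred to here can be sketched as follows (ListNode and the method name are illustrative):

```java
public class ReverseList {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    // Walk from head to tail, flipping each link's direction as we go.
    static ListNode reverse(ListNode head) {
        ListNode prev = null;
        while (head != null) {
            ListNode next = head.next; // remember the rest of the list
            head.next = prev;          // point the current node backwards
            prev = head;               // advance prev
            head = next;               // advance head
        }
        return prev; // prev ends up as the new head
    }
}
```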
It's simply WRONG to put reverseKGroup(curr, k); at the end.
Plain recursion will re-compute many duplicate cases. For example, for 123XXXX, you'll compute 3XXXX first in A-B-3XXXX (1-2-3XXXX), then again in l-3XXXX (12-3XXXX). You can add a cache to avoid the duplication.
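One way to add that cache, as a sketch assuming the usual Word Break II shape (all names here are hypothetical):

```java
import java.util.*;

public class WordBreakMemo {
    // Cache: start index -> every segmentation of s.substring(start),
    // so each suffix is solved at most once.
    static List<String> dfs(String s, int start, Set<String> dict,
                            Map<Integer, List<String>> memo) {
        if (memo.containsKey(start)) return memo.get(start); // reuse cached suffix result
        List<String> res = new ArrayList<>();
        if (start == s.length()) {
            res.add(""); // the empty suffix has one (empty) segmentation
        } else {
            for (int end = start + 1; end <= s.length(); end++) {
                String word = s.substring(start, end);
                if (!dict.contains(word)) continue;
                for (String rest : dfs(s, end, dict, memo)) {
                    res.add(rest.isEmpty() ? word : word + " " + rest);
                }
            }
        }
        memo.put(start, res);
        return res;
    }

    static List<String> wordBreak(String s, Set<String> dict) {
        return dfs(s, 0, dict, new HashMap<>());
    }
}
```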
@harsh013 mid means the average of the smallest and biggest values, not the middle index; so here, mid should be 4 (= (1 + 8) / 2) instead of 3 (= array[4]).
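That reading of mid, bisecting the value range rather than the index range, can be sketched like this (a hedged example assuming the thread is about the count-based duplicate search over values 1..n; the class and method names are made up):

```java
public class ValueRangeSearch {
    // Binary search over VALUES, not indices: mid is the average of the
    // smallest and biggest candidate value, e.g. (1 + 8) / 2 = 4.
    static int findDuplicate(int[] nums) {
        int lo = 1, hi = nums.length - 1; // n+1 entries holding values in 1..n
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            int count = 0;
            for (int x : nums) {
                if (x <= mid) count++; // how many values fall in [lo, mid]
            }
            if (count > mid) hi = mid; // pigeonhole: a duplicate lies in [lo, mid]
            else lo = mid + 1;         // otherwise it lies in [mid+1, hi]
        }
        return lo;
    }
}
```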