@newdev So I think there is a bug in the OJ. I passed the tests with the provided interface, without adding any extra int parameter to the sum function, but the Run Code feature seems to use a different interface, so I had to change the signature of sum to run the program.
Posts made by actionlee0405

RE: Runtime error - constructor not being called.

Right now I can't run the code with the C++ version; it says:

Line 64: no matching function for call to 'Excel::sum(int, char, std::vector<std::__cxx11::basic_string<char> >, int)'

under the following code:

class Excel {
public:
    Excel(int H, char W) {
    }
    void set(int r, char c, int v) {
    }
    int get(int r, char c) {
        return 0;
    }
    int sum(int r, char c, vector<string> strs) {
        return 0;
    }
};
And also, the expected answer pop out such message (after hitting run code button):
Line 23: error: method sum in class Excel cannot be applied to given types;

The test input is:
["Excel","get","set","get"]
[[3,"C"],[1,"A"],[1,"A",1],[1,"A"]]

Has anyone else run into the same issue?

RE: Minimum swaps need to make k girls sitting together
We can solve this question with a sliding-window method. We always keep a window containing the positions of k girls and process the string char by char. If we encounter a boy, we do nothing; otherwise we slide the window to include the current girl and pop the leftmost girl.
To find the minimum swaps needed in the current window, one can take the median position of the k girls in the window and sum the distances from the other girls to the girl at the median position. For example, consider string = "B[GGGBBBBG]BBGGBGG"; the median position in the current window is 2 or 3 (indexed from 0). Let the median position be 2; the total distance from all the other girls is 1 + 1 + 6 = 8. Then subtract the fixed adjustment distance (1 + 1 + 2) = 4 to get the minimum number of swaps = 4. The fixed adjustment distance is needed because the sum above moves every girl onto the median position itself, whereas the girls should end up in consecutive seats rather than in the same seat.
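The median reasoning above can be cross-checked with a short standalone sketch. A standard equivalent formulation subtracts each girl's index within the window from her position first; gathering then reduces to moving all normalized values onto their median, which folds the "fixed adjustment distance" into the computation automatically. The function name `minSwapsForWindow` is mine, not from the original post.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <vector>

// Minimum adjacent swaps to gather the girls at the given positions into
// k consecutive seats: subtract each girl's rank from her position, then
// sum distances to the median of the normalized values. This equals the
// (total distance to median) - (fixed adjustment distance) quantity
// described in the text.
int minSwapsForWindow(std::vector<int> girls) {
    std::sort(girls.begin(), girls.end());
    int k = (int)girls.size();
    for (int i = 0; i < k; i++) {
        girls[i] -= i;  // normalize: consecutive targets collapse to one point
    }
    int med = girls[(k - 1) / 2];
    int total = 0;
    for (int x : girls) {
        total += std::abs(x - med);
    }
    return total;
}
```

For the window [GGGBBBBG] above, the girls sit at positions 1, 2, 3, 8, and the function returns 4, matching the 8 - 4 computation in the text.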
Therefore we already have an O(kn)-time, O(k)-space algorithm: k is the time to find the minimum number of swaps in the current window, and n is the time to scan the whole string.
Of course you can improve the algorithm to O(n) time. When the window slides to the next position, the median can be tracked easily if the window is implemented as a linked list. To update the distances to the median, let left_distance be the total distance from the girls to the left of the median girl to the median, and define right_distance similarly. Let m be the median position and m' the new median position. Suppose the window slides to the next girl at position j and pops the leftmost girl at position i. Then the new left_distance = left_distance - (m - i) + (m' - m) * ((k - 1) / 2) and the new right_distance = right_distance + (j - m') - (m' - m) * (k / 2). Therefore we can update the minimum number of swaps in constant time.
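The incremental update formulas can be sanity-checked against a direct recomputation as the window slides. This is a verification sketch I wrote (the name `checkIncrementalUpdates` is mine); it returns true iff the incremental left/right distances agree with a from-scratch recomputation at every step.

```cpp
#include <deque>
#include <string>

// Slide a window of k girl positions across s; after each slide, compare the
// incremental left_distance/right_distance updates from the text against a
// direct recomputation over the window. Returns true if they always agree.
bool checkIncrementalUpdates(const std::string& s, int k) {
    std::deque<int> window;
    size_t i = 0;
    for (; i < s.size() && (int)window.size() < k; i++) {
        if (s[i] == 'G') window.push_back((int)i);
    }
    if ((int)window.size() < k) return true;  // not enough girls: nothing to check
    int med = window[(k - 1) / 2];
    long left = 0, right = 0;
    for (int idx : window) {
        if (idx < med) left += med - idx; else right += idx - med;
    }
    for (; i < s.size(); i++) {
        if (s[i] != 'G') continue;
        int head = window.front(), oldMed = med;
        window.pop_front();
        window.push_back((int)i);
        med = window[(k - 1) / 2];
        // Incremental formulas from the text.
        left  = left - (oldMed - head) + (long)(med - oldMed) * ((k - 1) / 2);
        right = right + ((int)i - med) - (long)(med - oldMed) * (k / 2);
        // Direct recomputation for comparison.
        long l2 = 0, r2 = 0;
        for (int idx : window) {
            if (idx < med) l2 += med - idx; else r2 += idx - med;
        }
        if (left != l2 || right != r2) return false;
    }
    return true;
}
```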
Here is the code:

int minSwapNum(string& s, int k) {
    list<int> window;
    int i = 0;
    while ((i < s.size()) && (window.size() < k)) {
        if (s[i] == 'G') {
            window.push_back(i);
        }
        i++;
    }
    // Not enough girls
    if (window.size() < k) {
        return -1;
    }
    auto med_it = window.begin();
    advance(med_it, (k - 1) / 2);
    int left_dist = 0;
    int right_dist = 0;
    for (auto idx : window) {
        if (idx < *med_it) {
            left_dist += (*med_it - idx);
        } else {
            right_dist += (idx - *med_it);
        }
    }
    int adjust_dist = (k % 2 == 0) ? k * k / 4 : (k * k - 1) / 4;
    int min = left_dist + right_dist - adjust_dist;
    for (; i < s.size(); i++) {
        if (s[i] == 'G') {
            int head = window.front();
            int tail = i;
            int old_med = *med_it;
            window.pop_front();
            window.push_back(tail);
            med_it++;
            left_dist = left_dist - (old_med - head) + (*med_it - old_med) * ((k - 1) / 2);
            right_dist = right_dist + (tail - *med_it) - (*med_it - old_med) * (k / 2);
            if (left_dist + right_dist - adjust_dist < min) {
                min = left_dist + right_dist - adjust_dist;
            }
        }
    }
    return min;
}

RE: O(n*k^2) java solution with Trie structure (n: total number of words; k: average length of each word)
@fun4LeetCode Hi, just a suggestion to bring the time complexity down to O(nk). It seems that when you hit a word-end node in the trie, you perform a palindrome check on the query word from some index i to some index j. As you search along the trie from root to leaf, you may encounter O(k) such word ends, which results in O(k^2) work for each word.
Actually, if you preprocess the query word, for example with Manacher's algorithm in O(k) time, and keep the resulting table, it is easy to determine whether the substring s[i...j] is a palindrome in constant time. To illustrate, consider s = "aaaba" and the separator-interleaved string s' = "a#a#a#b#a". After Manacher's algorithm you get p[i] = length of the longest palindrome centered at s'[i]. To check whether s[i...j] is a palindrome, simply check whether p[i+j] >= j - i + 1.
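A minimal sketch of that preprocessing, written by me (the names `manacher` and `isPalSub` are mine). Note that this version interleaves with a leading and trailing '#' as well, so the center of s[i...j] lands at transformed index i + j + 1 rather than i + j; the exact offset depends on the chosen transform.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Manacher's algorithm on the '#'-interleaved string t = "#a#b#...#".
// p[c] is the palindrome radius at t[c], which with this transform equals
// the length (in original characters) of the palindrome centered there.
std::vector<int> manacher(const std::string& s) {
    std::string t = "#";
    for (char c : s) { t += c; t += '#'; }
    int n = (int)t.size();
    std::vector<int> p(n, 0);
    int center = 0, rightEdge = 0;  // rightmost palindrome seen so far
    for (int c = 1; c < n; c++) {
        if (c < rightEdge) p[c] = std::min(rightEdge - c, p[2 * center - c]);
        while (c - p[c] - 1 >= 0 && c + p[c] + 1 < n &&
               t[c - p[c] - 1] == t[c + p[c] + 1]) {
            p[c]++;
        }
        if (c + p[c] > rightEdge) { center = c; rightEdge = c + p[c]; }
    }
    return p;
}

// O(1) palindrome test for s[i..j] after the O(k) preprocessing:
// the center of s[i..j] sits at t[i + j + 1] in the transformed string.
bool isPalSub(const std::string& s, int i, int j) {
    std::vector<int> p = manacher(s);
    return p[i + j + 1] >= j - i + 1;
}
```

For s = "aaaba": "aaa" (indices 0..2) and "aba" (indices 2..4) pass the check, while "aab" (1..3) does not.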

12ms O(n) solution using palindrome tree (NOT KMP)
In this problem, we simply want to know the longest palindromic prefix of s. For example, for s = "aabba", "aa" is that prefix, and we only need to patch "abb" in front of s. Here I want to introduce another very powerful and general data structure that can solve many questions about palindromic substrings of a string.
The data structure is called a palindromic tree. The nodes of the tree are the longest SUFFIX palindromic substrings of s[0...i] for 0 <= i < n, where n = s.size(). Besides these, two additional roots are introduced.
Each node, with palindrome A, stores its length (A.size()), a suffix link pointing to the node for the longest proper suffix palindrome of A, and outgoing edges (each edge is labeled with a char x and points to the palindrome node xAx, where A is the node's own palindrome). Regarding the two additional roots, one is the empty string, and its suffix link points to the other root; the other is a pseudo string of length -1 and has no suffix link. (You will see why in the code.)
To build the tree, process the string char by char from s[0] to s[n-1]. When processing s[i], start from the palindrome node A, the longest suffix palindrome of s[0...i-1]. Check whether s[i] == s[i - A.size() - 1]. If so, we can extend A to s[i]As[i]; otherwise, go to the node pointed to by A's suffix link and repeat the check, until either we can extend at some node or we hit the root of length -1. Let the found node be B. We can easily check whether the extended palindrome already exists by looking at B's outgoing edges. If it does not exist, create a new node and draw an edge from B to it labeled s[i]; then continue along B's suffix links to find the longest suffix palindrome that can also be extended on both sides with s[i], and point the new node's suffix link at that extension. Otherwise, do nothing. Finally, update the last pointer to the longest-suffix-palindrome node of s[0...i]. Therefore, to compute the longest palindromic prefix of s, simply reverse s and process it char by char to the end: the longest suffix palindrome of the reversed string is the one we need. It is not hard to prove that the size of the tree is O(n) and that the build time is also O(n).
Compared to KMP, we don't need to concatenate another string as a trick; it is straightforward. Besides, it can answer many other palindromic-substring problems. For more detail, please refer to:
http://adilet.org/blog/25-09-14/
http://www.geeksforgeeks.org/palindromic-tree-introduction-implementation/

struct PalindromeNode {
    unordered_map<char, PalindromeNode*> out;
    PalindromeNode* suffix;
    int len;
    PalindromeNode(int l) : len(l), suffix(NULL) {};
};

class Solution {
public:
    int findLongestSuffixPalindrome(string& s) {
        PalindromeNode* root1 = new PalindromeNode(0);
        PalindromeNode* root2 = new PalindromeNode(-1);
        root1->suffix = root2;
        PalindromeNode* last = root1;
        for (int i = 0; i < s.size(); i++) {
            int j = i - last->len - 1;
            while ((j < 0) || (s[j] != s[i])) {
                last = last->suffix;
                j = i - last->len - 1;
            }
            if (last->out.find(s[i]) == last->out.end()) {
                PalindromeNode* node = new PalindromeNode(last->len + 2);
                last->out[s[i]] = node;
                // Find the suffix link for the new node
                last = last->suffix;
                if (last == NULL) {
                    // node is a single char
                    node->suffix = root1;
                } else {
                    while (last->suffix != NULL) {
                        j = i - last->len - 1;
                        if (s[j] == s[i]) {
                            break;
                        }
                        last = last->suffix;
                    }
                    node->suffix = last->out[s[i]];
                }
                last = node;
            } else {
                last = last->out[s[i]];
            }
        }
        return last->len;
    }

    string shortestPalindrome(string s) {
        if (s.size() == 0) {
            return s;
        }
        string t = s;
        reverse(t.begin(), t.end());
        int prefixPalLen = findLongestSuffixPalindrome(t);
        s = string(t.begin(), t.end() - prefixPalLen) + s;
        return s;
    }
};

RE: Find the longest palindrome prefix.
@layor Indeed, I like your idea of using hashing for this problem, but I have one suggestion: don't use 1e9+7 as the base; use it as the modulo. I can see that you are effectively using 2^64 as the modulo, since you don't care about overflow, but it is recommended that the modulo be a prime.
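To make the suggestion concrete, here is a minimal sketch of a polynomial rolling hash in the recommended form: a small base and the large prime 1e9+7 as the modulo. The names `polyHash` and the base 131 are my own choices, not from the original post; any base larger than the alphabet works.

```cpp
#include <cstdint>
#include <string>

// Polynomial rolling hash: small base, large PRIME modulo (1e9+7),
// rather than base 1e9+7 with implicit mod 2^64.
const uint64_t MOD = 1000000007ULL;
const uint64_t BASE = 131ULL;  // arbitrary base > alphabet size

uint64_t polyHash(const std::string& s) {
    uint64_t h = 0;
    for (char c : s) {
        h = (h * BASE + (uint64_t)(unsigned char)c) % MOD;
    }
    return h;
}
```

With a prime modulo, the usual number-theoretic arguments about collision probability apply, which is exactly why 2^64 (a highly composite implicit modulo) is considered riskier against adversarial inputs.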

c++ code KMP+boolean array 26ms, Hashing + intervals 145ms, Trie + boolean array 160ms.
Let s.size() = m, dict.size() = k, and the longest word length in dict be n. The question has two parts: first, find the occurrences of the dict words in s; second, add the tags given these occurrences.
The first solution uses the KMP algorithm. For each word in dict, we search for all occurrences of the word in s and use a boolean array to mark them. The time complexity is O(k(m + n)). The answer was accepted in around 26ms.
class Solution {
public:
    vector<int> preprocessKMP(string& p) {
        // ret[i] is the length of the longest proper prefix of p[0...i] that is also a suffix
        vector<int> ret(p.size());
        for (int i = 1; i < p.size(); i++) {
            int len = ret[i - 1];
            while ((len > 0) && (p[i] != p[len])) {
                len = ret[len - 1];
            }
            if (p[i] == p[len]) {
                ret[i] = len + 1;
            } // else ret[i] == 0
        }
        return ret;
    }

    string addBoldTag(string s, vector<string>& dict) {
        if (s.size() == 0) {
            return s;
        }
        vector<bool> bold(s.size());
        for (auto word : dict) {
            // Do the KMP search for the pattern word in s
            vector<int> kmp = preprocessKMP(word);
            int idx = 0;
            int last = -1;
            for (int i = 0; i < s.size(); i++) {
                if (word[idx] == s[i]) {
                    idx++;
                } else {
                    while ((idx > 0) && (s[i] != word[idx])) {
                        idx = kmp[idx - 1];
                    }
                    if (word[idx] == s[i]) {
                        idx++;
                    }
                }
                if (idx == kmp.size()) {
                    int start = max(last + 1, i - (int)kmp.size() + 1);
                    fill(bold.begin() + start, bold.begin() + i + 1, true);
                    last = i;
                    idx = kmp[idx - 1];
                }
            }
        }
        // Tag string s using the boolean array bold
        string ret;
        bool state = true;
        for (int i = 0; i < s.size(); i++) {
            if (state && bold[i]) {
                ret += "<b>";
                state = false;
            } else if (!state && !bold[i]) {
                ret += "</b>";
                state = true;
            }
            ret.push_back(s[i]);
        }
        if (!state) {
            ret += "</b>";
        }
        return ret;
    }
};
The second solution uses hashing + interval merging. Simply enumerate all possible substrings (a double for-loop) and check whether each is in dict via hashing, then merge it into the pos array, which stores the intervals that need to be bold. The average time complexity is O(m^2 + kn): the first term for the substring enumeration, the second for building the hash set. Accepted in around 1100ms.
However, we can improve it a little. When enumerating the substrings starting at s[i], we only have to try ending indices from i + (max word length in dict) - 1 down to i. The time complexity is now O(mn + kn). Accepted in around 145ms.
class Solution {
public:
    string addBoldTag(string s, vector<string>& dict) {
        if (s.size() == 0) {
            return s;
        }
        unordered_set<string> hash;
        vector<pair<int, int>> pos;
        int maxLen = 0;
        // Build the hash set
        for (auto word : dict) {
            hash.insert(word);
            maxLen = max(maxLen, (int)word.size());
        }
        // Enumerate substrings and record tag intervals
        for (int i = 0; i < s.size(); i++) {
            int end = min(i + maxLen - 1, (int)s.size() - 1);
            string t = string(s.begin() + i, s.begin() + end + 1);
            int start = (pos.size() > 0) ? pos.back().second + 1 : -1;
            start = max(i, start);
            // Enumerate substrings starting at s[i]; the ending index is in s[start...end]
            for (int j = end; j >= start; j--) {
                if (hash.find(t) != hash.end()) {
                    pair<int, int> p = {i, j};
                    if ((pos.size() == 0) || (i > pos.back().second + 1)) {
                        pos.push_back(p);
                    } else {
                        pos.back().second = j;
                    }
                    break;
                }
                t.pop_back();
            }
        }
        // Tag the intervals
        if (pos.size() == 0) {
            return s;
        }
        string ret = (pos[0].first > 0) ? string(s.begin(), s.begin() + pos[0].first) : "";
        for (int i = 0; i < pos.size(); i++) {
            ret += "<b>";
            ret += string(s.begin() + pos[i].first, s.begin() + pos[i].second + 1);
            ret += "</b>";
            if (i < pos.size() - 1) {
                ret += string(s.begin() + pos[i].second + 1, s.begin() + pos[i + 1].first);
            }
        }
        if (pos.back().second < s.size() - 1) {
            ret += string(s.begin() + pos.back().second + 1, s.end());
        }
        return ret;
    }
};
The third solution uses a trie + boolean array. Build a trie over the words in dict. Then, for each index i in s, find the longest substring starting at i that matches a word in the trie and mark all of its chars, so that we know where to add the bold tags. The time complexity is O(kn + mn): the first term is the time to build the trie, the second is the time to search the substrings of s in the trie. The solution was accepted in around 160ms.
struct Node {
    bool isEnd;
    unordered_map<char, Node*> children;
    Node() : isEnd() {}
};

class Trie {
    Node* root;
public:
    Trie() { root = new Node(); }

    void add(string& s) {
        Node* node = root;
        int idx = 0;
        while (idx < s.size()) {
            if (node->children.find(s[idx]) == node->children.end()) {
                Node* newNode = new Node();
                node->children[s[idx]] = newNode;
            }
            node = node->children[s[idx]];
            idx++;
        }
        node->isEnd = true;
    }

    int search(int idx, string& s) {
        Node* node = root;
        int ret = -1;
        while (idx < s.size()) {
            if (node->children.find(s[idx]) == node->children.end()) {
                return ret;
            }
            node = node->children[s[idx]];
            if (node->isEnd) {
                ret = idx;
            }
            idx++;
        }
        return ret;
    }
};

class Solution {
public:
    string addBoldTag(string s, vector<string>& dict) {
        if (s.size() == 0) {
            return s;
        }
        vector<bool> bold(s.size());
        // Build a trie
        Trie trie;
        for (auto word : dict) {
            trie.add(word);
        }
        // Search in s, mark the bold array
        for (int i = 0; i < s.size(); i++) {
            int idx = trie.search(i, s);
            for (int j = i; j <= idx; j++) {
                bold[j] = true;
            }
        }
        // Tag using the boolean array
        bool state = true;
        string ret;
        for (int i = 0; i < s.size(); i++) {
            if (state && bold[i]) {
                ret += "<b>";
                state = false;
            } else if (!state && !bold[i]) {
                ret += "</b>";
                state = true;
            }
            ret.push_back(s[i]);
        }
        if (!state) {
            ret += "</b>";
        }
        return ret;
    }
};

RE: TireNode solution
@tangalai said in TireNode solution:
public class Solution {
    public string AddBoldTag(string s, string[] dict) {
        TireNode root = new TireNode();
        foreach(var word in dict){
            var current = root;
            foreach(var c in word){
                if(current.children[(int)c] == null)
                    current.children[(int)c] = new TireNode();
                current = current.children[(int)c];
            }
        }
        var dp = new bool[s.Length];
        int len = s.Length;
        for(int i = 0; i < len; i++){
            var current = root;
            int j = i;
            while(j < len && current.children[(int)s[j]] != null){
                dp[j] = true;
                current = current.children[(int)s[j]];
                j++;
                //Console.WriteLine(current == null);
            }
        }
        var result = s.ToCharArray().Select(x=>x+"").ToList();
        bool p = false;
When you mark the dp array, it seems you don't check whether a trie node is the end of a word in dict; you simply assign dp[j] = true. What happens for s = "abe123xyz", dict = ["abc", "xyz"]? I think your output will incorrectly bold the "ab" prefix (producing "<b>ab</b>e123<b>xyz</b>"), which is wrong.

c++ short code O(n^2) time and O(1) space solution
The problem asks us to enumerate all triplets without duplicates, so it is always good to sort the array first. Then we enumerate (a, b, c) with a <= b <= c and c < a + b. To do this, we try a and b in a double for-loop; then, to count the possible values of c, we find the first element nums[idx] such that nums[idx] >= a + b. The ordinary way is binary search, since the array is already sorted. But we can do better if we observe that b is also enumerated in increasing order: once we have found idx such that nums[idx] >= a + b, then for the next b' >= b we can start the pointer from idx to find the first element nums[idx'] >= a + b'. The inner while loop therefore iterates at most O(n) times over the whole inner for loop.
class Solution {
public:
    int triangleNumber(vector<int>& nums) {
        sort(nums.begin(), nums.end());
        int count = 0;
        for (int i = 0; i < nums.size(); i++) {
            int idx = i + 2;
            for (int j = i + 1; j < (int)nums.size() - 1; j++) {
                if ((nums[i] > 0) && (nums[j] > 0)) {
                    while ((idx < nums.size()) && (nums[idx] < nums[i] + nums[j])) {
                        idx++;
                    }
                    count += idx - j - 1;
                }
            }
        }
        return count;
    }
};

RE: Java Solution, HashMap
@MutouMan said in Java Solution, HashMap:
> public class Solution {
>     public List<List<String>> findDuplicate(String[] paths) {
>         List<List<String>> output = new ArrayList<List<String>>();
>         Map<String, List<String>> hashmap = new HashMap<String, List<String>>();
>         if (paths.length == 0) return output;
>         for (int i = 0; i < paths.length; i++) {
>             String tmp = paths[i];
>             String[] curr = tmp.split(" ");
>             for (int j = 1; j < curr.length; j++) {
>                 String[] contents = curr[j].split("txt");
>                 String dit = curr[0] + "/" + contents[0] + "txt";
>                 List<String> dits = hashmap.getOrDefault(contents[1], new ArrayList<String>());
>                 dits.add(dit);
>                 hashmap.put(contents[1], dits);
>             }
>         }
>         for (String key : hashmap.keySet()) {
>             if (hashmap.get(key).size() > 1) {
>                 output.add(hashmap.get(key));
>             }
>         }
>         return output;
>     }
> }
Hi, I think your code is wrong because you use "txt" to split the file name from the content. It is better to split on "(" instead, since a file name may contain "txt" as a substring.
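A minimal sketch of that suggestion in C++ (the helper name `splitEntry` is mine): split each "name.txt(content)" token at the first '(' rather than at the literal substring "txt". It assumes the token is well-formed, with one '(' and a trailing ')'.

```cpp
#include <string>
#include <utility>

// Split a well-formed token like "name.txt(content)" at the first '(',
// so that file names containing "txt" as a substring still parse correctly.
std::pair<std::string, std::string> splitEntry(const std::string& token) {
    std::size_t open = token.find('(');
    std::string name = token.substr(0, open);
    std::string content = token.substr(open + 1, token.size() - open - 2); // drop ')'
    return {name, content};
}
```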
It is totally fine to use List<String>, as each file name (plus its directory, of course) should be unique by the problem description. For an actual test case provided by the OJ, look at this example:
["root/qgjazhtliq/djmevsktisuvd/acsuolhnermqzok/mkts/ibrdqxawjgut/emb wl.txt(odumfqzwczehoyk) z.txt(gojsklotgchjzfm) txtoyg.txt(gojsklotgchjzfm) eupidhefx.txt(ahlsazuzrsf) rekzkaifwp.txt(yfxaymkefaofowqjpgaceffkjsehtmqkgy) l.txt(odumfqzwczehoyk) bqmhpxumxlbe.txt(yfxaymkefaofowqjpgaceffkjsehtmqkgy) qoqgiauqbayuc.txt(odumfqzwczehoyk) mpstemqlxy.txt(ahlsazuzrsf)"]

The file "txtoyg.txt(gojsklotgchjzfm)", whose name contains "txt" as a substring, causes the problem.