# Is there a better solution for this Word Break II?

• The output size can be exponential in n, so I don't think there can be a solution running in polynomial time for the worst case.

• Yes, there are ways to make it run in polynomial time for any worst case as long as you use dynamic programming.
Actually, what I did was to break the target into two tasks:

1. Judge whether each substring can be broken or not (using dynamic programming; time complexity is O(n*n)).

2. Iteratively break the string and add the sentences to the final result.
Here is my snippet of code:

``````
public ArrayList<String> wordBreak(String s, Set<String> dict) {
    // Note: The Solution object is instantiated only once and is reused by each test case.
    ArrayList<String> result = new ArrayList<String>();
    boolean[][] table = new boolean[s.length()][s.length()];
    wordBreak(table, s, dict); // initialize the table
    if (!table[0][s.length() - 1]) {
        return result; // no valid break exists
    }
    recurseAdd(0, s, result, new LinkedList<String>(), table, dict);
    return result;
}
``````

``````
public void recurseAdd(int start, String input, ArrayList<String> result,
        LinkedList<String> forNow, boolean table[][], Set<String> dict) {
    if (start == input.length()) {
        StringBuilder newOne = new StringBuilder();
        for (int i = forNow.size() - 1; i >= 0; i--) {
            newOne.append(forNow.get(i));
            if (i > 0) newOne.append(' ');
        }
        result.add(newOne.toString());
        return;
    }
    for (int i = 1 + start; i <= input.length(); i++) {
        if (dict.contains(input.substring(start, i))) { // valid word
            // recurse only if we reached the end or the rest can also be broken
            if (i == input.length() || table[i][input.length() - 1]) {
                forNow.push(input.substring(start, i)); // push this segment onto the stack
                recurseAdd(i, input, result, forNow, table, dict);
                forNow.pop();
            }
        }
    }
}

// fill the table so we can judge whether any substring can be broken
public void wordBreak(boolean[][] dp, String s, Set<String> dict) {
    for (int i = 0; i < s.length(); i++)
        dp[i][i] = dict.contains(s.substring(i, i + 1));
    for (int i = 1; i < s.length(); i++) {
        for (int j = 0; j < s.length() - i; j++) {
            // dp[j][j + i]: can s.substring(j, j + i + 1) be broken?
            if (dict.contains(s.substring(j, j + i + 1)))
                dp[j][j + i] = true;
            else {
                for (int k = j; k < j + i; k++) {
                    if (dp[j][k] && dp[k + 1][j + i]) {
                        dp[j][j + i] = true;
                        break;
                    }
                }
            }
        }
    }
}
``````

• Nice code. But it is still not polynomial time for the worst case. You can test it on your own machine: use 100 a's as the input string and a dictionary of 10 words consisting of 1 to 10 a's respectively (i.e., "a", "aa", ...), and you'll see it takes forever to finish. Remember, the output size can be exponential in n; we can't expect polynomial time in those situations.

• Use dynamic programming to calculate, for each index, the last index of every possible preceding word: `vector<int> indexes[s.length()];`
Then you can easily find all solutions by a recursive traversal of the `indexes` array.
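The description above might look like the following sketch (the identifiers `prev`, `reachable`, and `walk` are mine, not from the post): `prev[i]` lists every `j` such that `s[0..j)` can be fully broken and `s[j..i)` is a dictionary word, and a recursive traversal rebuilds the sentences.

```cpp
#include <string>
#include <vector>
#include <unordered_set>

// Rebuild sentences by walking prev backwards from position i.
static void walk(const std::string& s, const std::vector<std::vector<int>>& prev,
                 int i, const std::string& suffix, std::vector<std::string>& out) {
    if (i == 0) { out.push_back(suffix); return; }
    for (int j : prev[i]) {
        std::string word = s.substr(j, i - j);
        walk(s, prev, j, suffix.empty() ? word : word + " " + suffix, out);
    }
}

std::vector<std::string> wordBreakAll(const std::string& s,
                                      const std::unordered_set<std::string>& dict) {
    int n = s.length();
    std::vector<std::vector<int>> prev(n + 1);
    std::vector<bool> reachable(n + 1, false);  // reachable[i]: s[0..i) breaks
    reachable[0] = true;
    for (int i = 1; i <= n; i++)
        for (int j = 0; j < i; j++)
            if (reachable[j] && dict.count(s.substr(j, i - j))) {
                reachable[i] = true;
                prev[i].push_back(j);
            }
    std::vector<std::string> result;
    if (reachable[n]) walk(s, prev, n, "", result);
    return result;
}
```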


• Hi fengse8643. Have you read the FAQ http://oj.leetcode.com/discuss/faq/ ? Posting only code is not allowed. If you want to share your solution, please describe your algorithm in a few words and also add some comments to your code to help others understand it. Thanks!

• Here is my solution for this question. I use the concept of dynamic programming: a set remembers every starting index from which there is no valid word break, and a hash map<int, vector<int>> remembers each starting index that does have a word break, where the vector<int> holds the end indices corresponding to that starting index. Thus, every time we want to find a word break from the current index, we first make sure the index may still lead to a valid break (i.e., it is not in the set); and if the index is already in the hash map, we don't have to iterate through the following indices one by one.

``````
bool wordbreak(vector<string> &v, string s, string sentence,
               unordered_map<int, vector<int>> &found,
               unordered_set<int> &notfound, int begin,
               unordered_set<string> &dict) {
    if (begin == s.length()) { v.push_back(sentence); return true; }
    if (notfound.find(begin) != notfound.end()) { return false; } // known failure
    if (found.find(begin) != found.end()) {
        // we already know every word that can start here
        for (int i = 0; i < found[begin].size(); i++) {
            wordbreak(v, s, sentence + " " + s.substr(begin, found[begin][i] - begin + 1),
                      found, notfound, found[begin][i] + 1, dict);
        }
        return true;
    }
    bool flag = false;
    for (int i = begin; i < s.length(); i++) {
        string word = s.substr(begin, i - begin + 1);
        if (dict.find(word) != dict.end()) {
            string input = sentence == "" ? word : sentence + " " + word;
            if (wordbreak(v, s, input, found, notfound, i + 1, dict)) {
                flag = true;
                if (found.find(begin) == found.end()) { vector<int> r; found[begin] = r; }
                found[begin].push_back(i);
            } else {
                notfound.insert(i + 1);
            }
        }
    }
    if (!flag) { notfound.insert(begin); return false; }
    return true;
}

vector<string> wordBreak(string s, unordered_set<string> &dict) {
    unordered_map<int, vector<int>> found;
    unordered_set<int> notfound;
    vector<string> result;
    wordbreak(result, s, "", found, notfound, 0, dict);
    return result;
}
``````

• From right to left, use DP to save the next valid starting indexes for the current index. Then backtrack from the beginning.

``````
void collect(vector<list<int>>& mark, int ind, const string& s,
             string path, vector<string>& result) {
    for (auto& stop : mark[ind]) {
        string sub = s.substr(ind, stop - ind);
        string newpath = path + (ind == 0 ? sub : " " + sub);
        if (stop == s.length()) result.push_back(newpath);
        else collect(mark, stop, s, newpath, result);
    }
}

vector<string> wordBreak(string s, unordered_set<string> &dict) {
    vector<list<int>> mark(s.length(), list<int>());
    for (int stop = s.length(); stop >= 0; stop--) {
        // skip positions no word ends at (except the initial stop == s.length())
        if (stop < s.length() && mark[stop].empty()) continue;
        for (int start = stop - 1; start >= 0; start--)
            if (dict.count(s.substr(start, stop - start)))
                mark[start].push_back(stop);
    }
    vector<string> result;
    collect(mark, 0, s, "", result);
    return result;
}
``````

• Based on Word Break, add one more DP table to record all possible partitions at the current index.

``````
class Solution {
public:
    vector<string> wordBreak(string s, unordered_set<string> &dict) {
        // Note: The Solution object is instantiated only once and is reused by each test case.
        int len = s.length();
        vector<bool> dp(len + 1, false);                        // dp[i]: can s[i..] be broken?
        vector<vector<string> > dp2(len + 1, vector<string>()); // dp2[i]: all partitions of s[i..]
        dp2[len].push_back("");
        dp[len] = true;

        for (int i = len - 1; i >= 0; i--) {
            for (int j = i; j < len; j++) {
                string str = s.substr(i, j - i + 1);
                if (dict.find(str) != dict.end() && dp[j + 1]) {
                    dp[i] = true;
                    for (int index = 0; index < dp2[j + 1].size(); index++) {
                        string sep = (j + 1 != len) ? " " : "";
                        dp2[i].push_back(str + sep + dp2[j + 1][index]);
                    }
                }
            }
        }
        return dp2[0];
    }
};
``````

• Nice shot. The `collect` part is like a DFS on a tree that prints all the paths from root to leaves. I just wonder whether there is any iterative (a.k.a. non-recursive) way to do it (I used some dirty hacks with a stack, but it's long and ugly).
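One common iterative pattern (my own sketch, not from the thread) pairs each pending index with the path built so far on an explicit stack, over the same `mark` structure (`mark[i]` = valid stop positions after `i`):

```cpp
#include <list>
#include <stack>
#include <string>
#include <utility>
#include <vector>

// Iterative version of `collect`: each stack entry carries the start
// index together with the sentence built so far.
std::vector<std::string> collectIter(const std::vector<std::list<int>>& mark,
                                     const std::string& s) {
    std::vector<std::string> result;
    std::stack<std::pair<int, std::string>> st;
    st.push(std::make_pair(0, std::string("")));
    while (!st.empty()) {
        std::pair<int, std::string> top = st.top();
        st.pop();
        for (int stop : mark[top.first]) {
            std::string sub = s.substr(top.first, stop - top.first);
            std::string newpath = top.second + (top.first == 0 ? sub : " " + sub);
            if (stop == (int)s.length()) result.push_back(newpath);
            else st.push(std::make_pair(stop, newpath));
        }
    }
    return result;
}
```

The sentences may come out in a different order than in the recursive version, since the stack reverses sibling order.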

• This is a memoization-based approach. I cache every parse failure in a set and look that set up on each function call.

If the input string has a one-to-one mapping to dictionary words, then it is linear in time complexity.

``````
public class Solution {
    HashSet<String> map = null; // caches substrings that cannot be broken

    public ArrayList<String> wordBreak(String s, Set<String> dict) {
        ArrayList<String> res = new ArrayList<String>();
        if (s == null || s.length() == 0)
            return res;

        map = new HashSet<String>();
        res = recBreak(s, dict);

        return (res != null) ? res : new ArrayList<String>();
    }

    private ArrayList<String> recBreak(String s, Set<String> dict) {
        if (map.contains(s)) // known failure
            return null;

        ArrayList<String> list = new ArrayList<String>();
        ArrayList<String> temp = null;

        for (int i = 1; i <= s.length(); i++) {
            temp = new ArrayList<String>();
            String str = s.substring(0, i);
            if (dict.contains(str)) {
                if (i < s.length())
                    temp = recBreak(s.substring(i, s.length()), dict);
                if (temp != null) {
                    if (temp.size() == 0)
                        list.add(str); // str consumed the whole string
                    else {
                        for (int j = 0; j < temp.size(); j++) {
                            list.add(str + " " + temp.get(j));
                        }
                    }
                }
                // temp == null: the suffix cannot be broken, skip this prefix
            }
        }

        if (list.size() == 0)
            map.add(s); // remember this failure

        return (list.size() > 0) ? list : null;
    }
}
``````

• I process the input string in time O(mn), where m = length of the longest word and n = length of the string. For this, I process the input string as suffixes of increasing length, using a trie containing the dictionary words to process prefixes of each suffix optimally. After this is completed, I get a map from indices in the string to the dictionary words that can start there. The words at each index are stored as a linked list, since multiple words can start at one index. In addition, a word is added only if it can connect to another index which already has words, or if it ends the string.

After this preprocessing, you essentially have a DAG, which you traverse depth-first (with no visited-node check) along all paths starting with one of the nodes stored at index 0 of the map. This may not be clear from the code.

``````public class Solution {
public class TrieNode {
Character c;
boolean endWord;
String word;
Map<Character, TrieNode> nodes = new LinkedHashMap<Character, TrieNode>();

public TrieNode(char c) {
this.c = Character.valueOf(c);
}

public TrieNode(String s) {
if (s != null) {
Character c = Character.valueOf(s.charAt(s.length()-1));
this.c = c;
this.endWord = true;
this.word = s;
}
}

public TrieNode get(Character c) {
return nodes.get(c);
}

public void put(Character c, TrieNode node) {
nodes.put(c, node);
}
}

public class Trie {
TrieNode root = new TrieNode(null);

public void insertWord(String s) {
//System.out.println("Inserting : " + s);
TrieNode cur = root;
for (int i = 0; i < s.length(); i++) {
Character c = Character.valueOf(s.charAt(i));
//System.out.println("Inserting character : " + c);
TrieNode cnode = cur.get(c);
if (cnode == null) {
//System.out.println("New node for character : " + c + " under : "
//		+ cur.c == null ? "root" : cur.c);
cnode = new TrieNode(c);
cur.put(c, cnode);
} else {
//System.out.println("Found node for character : " + c + " under : "
//		+ cur.c == null ? "root" : cur.c);
}
if (i == s.length() - 1) {
cnode.endWord = true;
cnode.word = s;
//System.out.println("Word node for : " + s);
}
cur = cnode;
}
}
}

class Word {
String word;
Word next;
}

public ArrayList<String> wordBreak(String s, Set<String> dict) {
Trie t = new Trie();
int maxLenWord = 0;
boolean endReachable = false;

// 1. construct trie with dictionary words
for (String word : dict) {
if (word.length() > maxLenWord) maxLenWord = word.length();
t.insertWord(word);
}

// we store the words which start at various indices in this map
// if multiple words can start at 1 index, they are stored in a linked
// list at that location in the map
Map<Integer, Word> indexWordMap = new HashMap<Integer, Word>();

// 2. go thru string, consider suffixes of increasing length
for (int j = s.length() - 1; j >= 0; j--) {
int k = j;
TrieNode n = t.root;
while(n != null && !n.nodes.isEmpty() && k < s.length()) {
// 3. traverse the trie using a prefix of this particular
//    suffix and see if you can reach words which end in
//    indices which have words starting from them
n = n.get(Character.valueOf(s.charAt(k)));
if (n != null && n.endWord && (j + n.word.length() == s.length()
|| indexWordMap.get(j + n.word.length()) != null)) {
Word w = new Word();
w.word = n.word;
w.next = indexWordMap.get(j);
indexWordMap.put(j, w);
//System.out.println("Adding " + w.word + " to pos : " + j);
}
if (n != null) {
k++;
if (k >= s.length()) {
endReachable = true;
}
}
}
if (!endReachable && j < s.length() - maxLenWord) {
// LeetCode OJ doesn't accept null here
return new ArrayList<String>();
}
}

// now we have a graph which can be traversed from start (0)
// to the last node to get a sentence.
// do a depth first traversal with no visited node check to
// print out all sentences
LinkedList<String> ll = new LinkedList<String>();
List<String> sentences = new ArrayList<String>();
getSentences(s.length(), indexWordMap, 0, ll, sentences);
return new ArrayList<String>(sentences);
}

private void getSentences(int length, Map<Integer, Word> words, int index
, LinkedList<String> sentence, List<String> sentences) {

if (index == length) {
// go thru the entire list and build the sentence
int i = 0;
StringBuilder sb = new StringBuilder();
for (String s : sentence) {
if (i != 0) sb.append(' ');
sb.append(s);
i++;
}
sentences.add(sb.toString());
} else {
Word w = words.get(index);
while (w != null) {
sentence.addLast(w.word); // try this word at the current position
getSentences(length, words, index + w.word.length(), sentence, sentences);
sentence.pollLast();
w = w.next;
}
}
return;
}
}``````

• You are right, there is no polynomial-time algorithm for this, since in the worst case the output itself is exponential.

People here use DP to speed it up, but unfortunately DP cannot make it polynomial.

• Really nice and neat solution. I got confused by `if(stop<s.length()&&mark[stop].empty())continue;` until I noticed that when the iteration starts, stop==s.length(), so that branch is not entered the first time. Aha!

• Hi, jackyz! I used your solution but got TLE. What do you think might be the reason?

• Oh, I found the problem: the line `if(stop<s.length()&&mark[stop].empty())continue;` is very important!

• Very impressive, especially the building of `mark`.


• I have the same idea, but when I implemented it in Java and generated the result with a stack (not recursion), it became slow and ran out of memory on inputs like "aaaaaaaaaaaaaaaaa...".
