# Python O(n) solution that uses a dictionary.

• ``````
class Solution:
    # @return an integer
    def lengthOfLongestSubstring(self, s):
        maxlength = 0
        templength = 0
        charused = {}   # char -> index of its most recent occurrence
        start = 0       # left edge of the current duplicate-free substring
        for i in range(len(s)):
            if s[i] not in charused or charused[s[i]] < start:
                # s[i] does not occur in the current substring: extend it.
                charused[s[i]] = i
                templength += 1
                maxlength = max(maxlength, templength)
            else:
                # s[i] duplicates a character in the current substring:
                # move start just past the earlier occurrence.
                start = charused[s[i]] + 1
                charused[s[i]] = i
                templength = i - start + 1
                maxlength = max(maxlength, templength)

        return maxlength
``````

The current substring is the longest substring without duplicates that ends at i, the element currently being visited.
By moving i forward (i = i + 1), we scan the entire string s.

After moving i forward, if we find a duplicate in the current substring, we look up the dictionary to get the position of the earlier occurrence, recalculate the substring length, and move the start of the current substring past it. The invariant here is:

the current substring has no duplicates.

By recording the length of every "current" substring and taking the max, we get the length of the longest substring without duplicates.
The hidden argument here is that the maximum-length substring, call it s*, must be a "current" substring at the moment we visit the last element of s*.
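The invariant argument above can be exercised end to end. Below is a self-contained sketch of the same dictionary-based sliding window (restated as a plain function rather than a `Solution` method; the function name is mine), run against a few classic test strings whose expected answers come from the problem statement:

``````python
def length_of_longest_substring(s):
    """Length of the longest substring of s without repeating characters."""
    maxlength = 0
    start = 0      # left edge of the current duplicate-free substring
    charused = {}  # char -> index of its most recent occurrence
    for i, ch in enumerate(s):
        if ch in charused and charused[ch] >= start:
            # Duplicate inside the current substring: move start past it.
            start = charused[ch] + 1
        charused[ch] = i
        maxlength = max(maxlength, i - start + 1)
    return maxlength

for text, expected in [("abcabcbb", 3), ("bbbbb", 1), ("pwwkew", 3), ("", 0)]:
    assert length_of_longest_substring(text) == expected
``````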

• I do not think it's O(n). The appeal of using a dict is that looking up an element costs O(1) time. But when you insert an element into a dict, it takes O(log k) time, where k is the number of elements in the dict. In the worst case, the longest substring is the whole string, and the time complexity is

log(1) + log(2) + log(3) + ... + log(n-1) = O(n log n)
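Whatever one makes of the O(log k) premise (CPython dicts are hash tables with average O(1) insertion, not balanced trees), the arithmetic of the sum itself checks out: log 1 + log 2 + ... + log(n-1) = log((n-1)!), which by Stirling's approximation grows as Θ(n log n). A minimal numerical sanity check:

``````python
import math

n = 100_000
log_factorial = sum(math.log(k) for k in range(1, n))  # log((n-1)!)
bound = n * math.log(n)                                # n log n
# Stirling: log((n-1)!) ~ n log n - n, so the ratio creeps toward 1.
print(log_factorial / bound)
``````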

• There are many other algorithms that dynamically update a dictionary (like 2 Sum, 3 Sum, and so on), and they claim O(n)...

• ``````
def lengthOfLongestSubstring(self, s):
    previous_len = 0  # length of the duplicate-free window ending at i-1
    len_max = 0
    char_dict = {}    # char -> index of its most recent occurrence
    for i, c in enumerate(s):
        if c in char_dict and i - char_dict[c] <= previous_len:
            # c repeats inside the current window: shrink to just past it.
            previous_len = i - char_dict[c]
        else:
            previous_len += 1
        if previous_len > len_max:
            len_max = previous_len
        char_dict[c] = i
    return len_max
``````

Similar to yours.
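A quick way to gain confidence in this variant is to cross-check it against a brute-force reference on random strings. The sketch below restates the window approach as a standalone function and compares it with an O(n³)-ish reference (both helper names are mine):

``````python
import random

def longest_no_repeat_window(s):
    # Sliding-window version, same idea as the posts above.
    start, best, last = 0, 0, {}
    for i, c in enumerate(s):
        if c in last and last[c] >= start:
            start = last[c] + 1
        last[c] = i
        best = max(best, i - start + 1)
    return best

def longest_no_repeat_brute(s):
    # Reference: try every substring, keep the longest with unique chars.
    best = 0
    for i in range(len(s)):
        for j in range(i, len(s)):
            sub = s[i:j + 1]
            if len(set(sub)) == len(sub):
                best = max(best, len(sub))
    return best

random.seed(0)
for _ in range(200):
    t = "".join(random.choice("abcd") for _ in range(random.randint(0, 15)))
    assert longest_no_repeat_window(t) == longest_no_repeat_brute(t)
``````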
