For faster lookup, I use the node's value as the key, and in case of duplicate values I use a defaultdict to simulate a multimap, like C++'s std::multimap. Each iteration I call min() (which seems to be a linear scan) to find the smallest key. One runtime is 468 ms. Its complexity is O(N * k), where k is the number of lists and N is the total number of nodes across all lists.
Then I replaced the linear search with a min-heap (O(1) to peek the minimum), so the complexity should be O(N * log k). But one runtime is 1172 ms, which is much slower.
So I'm wondering whether the longer runtime is because the heap method pays O(log k) to insert a new node and O(log k) to delete one, or because Python's heapq is slow while C++'s priority_queue is fast, or some other reason.
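One way to probe whether heapq's per-operation overhead explains the gap is a micro-benchmark of the two "find the minimum" steps in isolation. This is a sketch under assumptions: `d` and `h` below are stand-ins for the defaultdict and the heap in the two solutions, and `k` is small, so constant factors rather than asymptotics dominate.

```python
import heapq
import random
import timeit

k = 10  # small number of lists; with small k, constant factors dominate
keys = random.sample(range(10000), k)

d = {key: None for key in keys}  # stand-in for the defaultdict of values
h = sorted(keys)                 # a sorted list is a valid min-heap

# One iteration of solution 1: a linear scan over k keys, done in C.
t_min = timeit.timeit(lambda: min(d), number=200000)

def heap_step():
    # One iteration of solution 2: pop the minimum, then push a new key,
    # i.e. one O(log k) delete plus one O(log k) insert.
    heapq.heappush(h, heapq.heappop(h))

t_heap = timeit.timeit(heap_step, number=200000)
print("min() scan: %.3fs, heap pop+push: %.3fs" % (t_min, t_heap))
```

Comparing the two timings for your actual k shows how much of the 468 ms vs 1172 ms difference is per-operation overhead rather than algorithmic complexity.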
import collections

class Solution:
    # @param a list of ListNode
    # @return a ListNode
    def mergeKLists(self, lists):
        if not lists:
            return None
        ll = collections.defaultdict(list)
        head = ListNode(-1)
        for i in lists:
            if i:
                ll[i.val].append(i)
        cur = head
        while ll:
            small = min(ll)
            node = ll[small].pop()
            if ll[small] == []:
                del ll[small]
            cur.next = node
            cur = node
            if node.next is None:
                continue
            ll[node.next.val].append(node.next)
        return head.next
The slower one:
import heapq

class Solution:
    # @param a list of ListNode
    # @return a ListNode
    def mergeKLists(self, lists):
        if not lists:
            return None
        ll = []
        for node in lists:
            if node:
                ll.append((node.val, node))
        head = ListNode(-1)
        heapq.heapify(ll)
        cur = head
        while ll:
            small, node = heapq.heappop(ll)
            cur.next = node
            cur = node
            if node.next is None:
                continue
            heapq.heappush(ll, (node.next.val, node.next))
        return head.next
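One thing worth checking in the heap version, unrelated to speed: pushing `(node.val, node)` tuples works on Python 2, but on Python 3 a tie on `val` makes the tuple comparison fall through to comparing ListNode objects, which raises TypeError. A common fix is to add a monotonically increasing counter as a tiebreaker. A sketch (written as a plain function rather than the Solution class; `ListNode` is redefined here so the snippet is self-contained):

```python
import heapq

class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def merge_k_lists(lists):
    # Entries are (val, tie, node); ties on val are broken by the counter,
    # so ListNode objects are never compared against each other.
    heap = [(node.val, i, node) for i, node in enumerate(lists) if node]
    heapq.heapify(heap)
    tie = len(heap)
    head = cur = ListNode(-1)
    while heap:
        _, _, node = heapq.heappop(heap)
        cur.next = node
        cur = node
        if node.next is not None:
            heapq.heappush(heap, (node.next.val, tie, node.next))
            tie += 1
    return head.next
```

The counter keeps the heap entries totally ordered without affecting the O(N log k) complexity.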