Implement merge sort without extra memory


No. I doubt it's possible. I'd need at the very least an index variable or so.
If you meant O(1) extra memory, I vaguely remember seeing a version that uses an in-place buffer area of size sqrt(m) for the merging, where m is the number of elements to be merged. That is, some sqrt(m)-sized part of the merge range facilitates the merging of the rest, and then that sqrt(m) part gets sorted, for example with bubble sort.

@agave, here is an O(n^2) solution. It is a kind of hybrid between merge sort and insertion sort. The algorithm applies divide and conquer just as merge sort does, but in the merge step it shifts part of the array in order to insert each element into its right place.
It is very ineffective.

void merge(int[] nums, int l, int mid, int r) {
    int i = l;
    int j = mid;
    while (i < j && j < r) {
        if (nums[i] > nums[j]) {
            int temp = nums[j];
            for (int k = j; k > i; k--) {
                nums[k] = nums[k - 1];
            }
            j++;
            nums[i] = temp;
        }
        i++;
    }
}

void mergeSort(int[] nums, int l, int r) {
    if (l + 1 < r) {
        int mid = l + (r - l) / 2;
        mergeSort(nums, l, mid);
        mergeSort(nums, mid, r);
        merge(nums, l, mid, r);
    }
}

mergeSort(nums, 0, nums.length);
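For anyone who wants to try it, here is the same hybrid wrapped in a runnable class (the class name and the demo in main are mine):

```java
// Hybrid of merge sort and insertion sort: divide and conquer as usual, but the
// merge works in place by shifting elements, which makes it O(n^2) worst case.
public class HybridMergeSort {

    // In-place merge of sorted runs nums[l..mid) and nums[mid..r) by shifting.
    static void merge(int[] nums, int l, int mid, int r) {
        int i = l, j = mid;
        while (i < j && j < r) {
            if (nums[i] > nums[j]) {
                int temp = nums[j];
                // shift nums[i..j) one step right to make room at index i
                for (int k = j; k > i; k--) {
                    nums[k] = nums[k - 1];
                }
                nums[i] = temp;
                j++;
            }
            i++;
        }
    }

    // Sorts nums[l..r), r exclusive.
    static void mergeSort(int[] nums, int l, int r) {
        if (l + 1 < r) {
            int mid = l + (r - l) / 2;
            mergeSort(nums, l, mid);
            mergeSort(nums, mid, r);
            merge(nums, l, mid, r);
        }
    }

    public static void main(String[] args) {
        int[] nums = {5, 2, 4, 6, 1, 3};
        mergeSort(nums, 0, nums.length);
        System.out.println(java.util.Arrays.toString(nums)); // [1, 2, 3, 4, 5, 6]
    }
}
```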

@elmirap recursion should NOT be allowed. Merge sort must be implemented bottom-up and then use an in-place merging strategy (like the one you did).

@agave said in Implement merge sort without extra memory:
recursion should NOT be allowed
Why not?
Also, that kind of recursion is already forbidden, as it uses Theta(log n) extra space.
But if there were a solution with recursion using only O(1) extra space, I don't see what speaks against it (other than your requirement to not use any extra memory, which I still believe to be mistaken).

@StefanPochmann because it takes Theta(log n) extra stack space, as you said. The guy wanted NO extra space whatsoever (perhaps sacrificing the n log n bound, but he never said anything about guaranteeing an n log n bound).

@agave Not every recursion takes Theta(log n) space. Also, please clarify: Are you really really sure no extra space whatsoever is allowed? So I can't use any variables other than the input variable?
O(n log n) is a cornerstone of merge sort. I'd say without it, it's not really merge sort. Also, there are ways to do it in O(n log n) time and O(1) space, so you don't have the excuse that breaking O(n log n) is necessary.

@agave Here is a bottom-up implementation using the ineffective merge defined above:
void mergeBottomUp(int[] nums) {
    int n = nums.length;
    int buckets = n;
    int numBucket = 1;
    while (buckets > 1) {
        int i = 0;
        while (i < nums.length - numBucket) {
            int r = Math.min(n, 2 * numBucket + i);
            merge(nums, i, i + numBucket, r);
            i = r;
            buckets--;
        }
        numBucket *= 2;
    }
}
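A runnable sketch of this bottom-up approach, combined with the shifting merge from earlier in the thread (the class name, the simplified width-doubling driver, and the demo are mine):

```java
// Bottom-up (iterative) merge sort with an in-place shifting merge:
// no recursion, no auxiliary array, O(n^2) worst-case time.
public class BottomUpMergeSort {

    // In-place merge of sorted runs nums[l..mid) and nums[mid..r) by shifting.
    static void merge(int[] nums, int l, int mid, int r) {
        int i = l, j = mid;
        while (i < j && j < r) {
            if (nums[i] > nums[j]) {
                int temp = nums[j];
                for (int k = j; k > i; k--) {
                    nums[k] = nums[k - 1];
                }
                nums[i] = temp;
                j++;
            }
            i++;
        }
    }

    // Merge runs of width 1, 2, 4, ... until one run covers the whole array.
    static void mergeBottomUp(int[] nums) {
        int n = nums.length;
        int width = 1; // current sorted-run length
        while (width < n) {
            for (int i = 0; i < n - width; i += 2 * width) {
                int r = Math.min(n, i + 2 * width);
                merge(nums, i, i + width, r);
            }
            width *= 2;
        }
    }

    public static void main(String[] args) {
        int[] nums = {5, 2, 4, 6, 1, 3};
        mergeBottomUp(nums);
        System.out.println(java.util.Arrays.toString(nums)); // [1, 2, 3, 4, 5, 6]
    }
}
```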


I vaguely remember this being asked of me. In my case, recursion was allowed (they didn't count the extra stack space) and extra O(1) memory was allowed. All they wanted to see was whether I could do it without allocating extra arrays linear in the size of the merge range. IIRC, I gave the naive repeated-shift answer.
Since the blocks that need to be merged are essentially contiguous, I figured there might be a way to do it with repeated swaps. Something like this (probably doesn't work, haven't properly tested):
void merge(int nums[], int l, int r) {
    int i = l, j = l + (r - l) / 2 + 1;
    while (i < j && j <= r) {
        if (nums[i] > nums[j])
            swap(nums[i], nums[j]);
        else if (nums[i] == nums[j]) {
            swap(nums[i + 1], nums[j]);
            j++;
        }
        i++;
    }
}
Anyone care to suggest improvements or maybe totally discard this approach?
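One quick way to check it: translating the snippet to Java (the inlined swaps and the driver are mine) and trying a small input shows that a swap can break the second half's sorted order, which nothing then repairs:

```java
// Counterexample for the swap-based merge sketched above. After a swap, the
// second half can lose its sorted order, and the loop ends before fixing it.
public class SwapMergeCheck {

    // Direct translation of the proposed merge; nums[l..mid] and nums[mid+1..r]
    // are each assumed sorted (inclusive bounds).
    static void merge(int[] nums, int l, int r) {
        int i = l, j = l + (r - l) / 2 + 1;
        while (i < j && j <= r) {
            if (nums[i] > nums[j]) {
                int t = nums[i]; nums[i] = nums[j]; nums[j] = t; // swap
            } else if (nums[i] == nums[j]) {
                int t = nums[i + 1]; nums[i + 1] = nums[j]; nums[j] = t; // swap
                j++;
            }
            i++;
        }
    }

    public static void main(String[] args) {
        int[] nums = {1, 3, 5, 2, 4, 6}; // halves [1,3,5] and [2,4,6], both sorted
        merge(nums, 0, 5);
        // Swapping 5 and 3 leaves the second half as [5,4,6], no longer sorted,
        // and i reaches j before anything can repair it.
        System.out.println(java.util.Arrays.toString(nums)); // [1, 2, 3, 5, 4, 6]
    }
}
```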

@babhishek21 merging without an extra buffer in O(n) time is very complicated. There are research papers on the topic. I don't really think the guy expected us to write a research paper in 45 minutes... Probably he just wanted to see the bottom-up approach.

@IWantToPass yeah. started with "implement Fibonacci" and then moved on to this thing...

@agave Crap. Hopefully this is the kind of question that the hiring committee doesn't take seriously lol.

@IWantToPass I was asked this question maybe because I was doing well. But I don't think this question makes sense after all...

@agave Wait so what solution did you provide? I honestly doubt I would even be able to come up with an answer to this, it seems pretty arcane..lol..

@IWantToPass Implemented a merge function that was supposedly working... The guy was ok with it but it was not working. During the interview I didn't realize the problem was much more difficult.

So I fixed my previous snippet a bit (the one where I did repeated swaps). And as it turns out, I couldn't do better than O(n^2) average case. The code is:

void fixinvariant(int nums[], int j, int r) {
    // place the larger number in its correct place
    while (j < r && nums[j] > nums[j + 1]) {
        swap(nums[j], nums[j + 1]);
        j++;
    }
}

void merge(int nums[], int l, int r) {
    int i = l, j = l + (r - l) / 2 + 1;
    while (i < j) {
        if (nums[i] > nums[j]) {
            // swap to get the smaller element in its correct place
            swap(nums[i], nums[j]);
            fixinvariant(nums, j, r);
        } else if (nums[i] == nums[j]) {
            // swap to get the smaller equal number in place
            swap(nums[i + 1], nums[j]);
            fixinvariant(nums, j, r);
        }
        i++;
    }
}
What did I try to do? I tried to simulate the two-pointer approach of a normal linear merge, except that whenever I found an inversion, I swapped the two pointers' contents. This ensured that my first pointer always pointed at the smaller of the two.
Why did it not work? When I made the swap, an element of the first half took the place of an element in the second half. Before the first swap, each half was guaranteed to maintain the invariant that it is already sorted. After the swap, there was no such guarantee for the remainder of the second half (which happens to be the whole of the second half throughout).
What did I do to fix that? I maintained the invariant in the second half by percolating the displaced number to its correct place. This incurs an additional O(n) cost, so the overall merge operation is no longer linear.
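Putting the fixed version together into a runnable class (the swap helper, the recursive mergeSort driver with inclusive bounds, and the demo are mine; merge and fixinvariant follow the snippet above):

```java
// Swap-based in-place merge sort: swap across the halves, then repair the
// second half's sorted order by bubbling. The O(n) repair per comparison makes
// the merge quadratic, so the whole sort is O(n^2) rather than O(n log n).
public class SwapMergeSort {

    static void swap(int[] nums, int a, int b) {
        int t = nums[a]; nums[a] = nums[b]; nums[b] = t;
    }

    // Bubble nums[j] right until nums[j..r] is sorted again.
    static void fixinvariant(int[] nums, int j, int r) {
        while (j < r && nums[j] > nums[j + 1]) {
            swap(nums, j, j + 1);
            j++;
        }
    }

    // Merge sorted halves nums[l..mid] and nums[mid+1..r], inclusive bounds.
    static void merge(int[] nums, int l, int r) {
        int i = l, j = l + (r - l) / 2 + 1;
        while (i < j) {
            if (nums[i] > nums[j]) {
                swap(nums, i, j);         // put the smaller element in place...
                fixinvariant(nums, j, r); // ...then repair the second half: O(n)
            } else if (nums[i] == nums[j]) {
                swap(nums, i + 1, j);
                fixinvariant(nums, j, r);
            }
            i++;
        }
    }

    // Standard recursive driver, inclusive bounds.
    static void mergeSort(int[] nums, int l, int r) {
        if (l < r) {
            int mid = l + (r - l) / 2;
            mergeSort(nums, l, mid);
            mergeSort(nums, mid + 1, r);
            merge(nums, l, r);
        }
    }

    public static void main(String[] args) {
        int[] nums = {4, 3, 2, 1};
        mergeSort(nums, 0, nums.length - 1);
        System.out.println(java.util.Arrays.toString(nums)); // [1, 2, 3, 4]
    }
}
```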