class Solution {
public:
    vector<int> countBits(int num) {
        vector<int> ret(num + 1, 0);
        // i & (i - 1) clears the lowest set bit of i, so the
        // answer for i is one more than the answer for i & (i - 1).
        for (int i = 1; i <= num; ++i)
            ret[i] = ret[i & (i - 1)] + 1;
        return ret;
    }
};
Four lines, C++, time O(n), space O(n)

i & (i - 1) drops the lowest set bit. For example: i = 14 is 1110 in binary, so i - 1 is 1101 and i & (i - 1) = 1100 = 12. The result has exactly one fewer "1" in its binary representation, so ret[i] = ret[i & (i - 1)] + 1.


Just want to add to fengcc's explanation, as it still took me some time to think through. When you remove the lowest set bit (the rightmost nonzero bit), you arrive at a number smaller than the current one: 14 ==> 12. Since the computed series already includes the counts for n = 0, 1, 2, ..., 12, 13, you already have the count for 12. And as fengcc pointed out, the only difference in count between 14 and 12 is one bit, so you add 1 to the count for 12 and get the correct count for 14.

@fengcc Nice solution! It never occurred to me that a simple for loop could solve this iteratively.

It's really neat! After reading @fengcc's explanation and thinking it through, I wrote it out for myself:
We need to return an array in which each entry holds the number of '1' bits in the binary representation of its index. Since i & (i - 1) removes one '1', we have array[i] - 1 = array[i & (i - 1)], and therefore array[i] = array[i & (i - 1)] + 1.

@asbear yeah, same question here, the code is definitely elegant, but I don't know why it works.