I am adding this answer, which is similar to the one above but offers a slightly different way of looking at things, so it might help someone.
Suppose we take the number 1111 (in binary) and subtract 1 from it: we get 1110. Subtracting 1 again gives 1101, and a pattern emerges: to subtract 1 from a binary number, flip all the 0's to the right of the least significant '1' bit to 1, and then flip that '1' itself to a 0 (assuming the most significant bit is the leftmost bit and the least significant bit is the rightmost).
So suppose we have v = 14, which in binary is 1110. To get v-1, first flip the zero to the right of the least significant 1 to a '1', which gives us 1111, and then flip that least significant 1 itself to a zero, giving us 1101.
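As a quick sketch to verify this pattern (the helper `print_bits4` is just for illustration, not part of the technique):

```c
#include <stdio.h>

/* print the low 4 bits of x, most significant first */
static void print_bits4(unsigned int x)
{
    for (int i = 3; i >= 0; i--)
        putchar((x >> i) & 1 ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    unsigned int v = 14;
    print_bits4(v);      /* prints 1110 */
    print_bits4(v - 1);  /* prints 1101: trailing 0 flipped to 1, lowest 1 flipped to 0 */
    return 0;
}
```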
Now, ANDing 'v' with 'v-1' preserves all the 1's to the left of the least significant 1, while the least significant 1 and everything to its right get ANDed to zero, leaving one fewer set bit than before. In the case of 1110, v & (v-1) = 1100, which has one less bit set than its predecessor. This step can be repeated once for each set bit, so the loop runs exactly as many times as there are 1's in the number, and we keep that tally in the count variable.
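Putting it together, here is a minimal sketch of the counting loop described above (the function name `count_set_bits` is my own, but the loop body is the classic Kernighan-style bit count):

```c
unsigned int count_set_bits(unsigned int v)
{
    unsigned int count = 0;
    while (v)           /* loop runs once per set bit */
    {
        v &= v - 1;     /* clear the least significant set bit */
        count++;
    }
    return count;
}
```

For v = 14 (1110), the loop runs three times, clearing one set bit per iteration (1110 → 1100 → 1000 → 0000), so `count_set_bits(14)` returns 3.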