The thing you have to remember is that the purpose of sprints is to help you set and hit deadlines.
Essentially you are trying to work out how quickly you can program 'features'. Then you can take your list of features, divide the total by this 'velocity' and say "we should be done by this date, therefore the cost of the software is X".
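To make that arithmetic concrete, here's a rough sketch in Python. The point sizes, velocity and cost per sprint are all made-up numbers, and 'velocity' here just means points completed per sprint:

```
from datetime import date, timedelta

# Made-up numbers, purely for illustration.
feature_points = [3, 5, 8, 2, 5, 13]   # estimated size of each feature
velocity = 12                          # points the team actually finishes per sprint
sprint_length = timedelta(weeks=2)
cost_per_sprint = 20_000               # whatever the team costs you per sprint

total_points = sum(feature_points)
sprints_needed = -(-total_points // velocity)          # ceiling division
finish_date = date.today() + sprints_needed * sprint_length
total_cost = sprints_needed * cost_per_sprint

print(f"{sprints_needed} sprints -> done around {finish_date}, cost roughly {total_cost}")
```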
At the start of a sprint, you take a bunch of features and say, "yeah I reckon we can do these in one sprint". At the end of the sprint you say "hmm, looks like we were off by X% on our estimates! Better factor that in next time and tell the customer we might be late!"
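The end-of-sprint check is just as mechanical. Something like this, again with hypothetical numbers:

```
# Compare what was committed at sprint planning with what actually got done.
planned_points = 14      # committed at the start of the sprint
completed_points = 10    # actually finished

error_pct = (planned_points - completed_points) / planned_points * 100
next_sprint_plan = completed_points   # crude correction: plan with what you actually managed

print(f"Off by {error_pct:.0f}% this sprint; plan around {next_sprint_plan} points next time")
```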
The reason you group things up into sprints rather than estimating each individual task is that there is a lot of interplay between features when you program them. It's easier to lump them together when you program and estimate.
If you add or remove stuff from a sprint you throw your numbers off and push back the deadline. Sometimes something will be important enough that you have to take the hit, but you should avoid it if possible.
Some people do leave x% free for unplanned stuff, but in my view it doesn't really work. You are just adding a fiddle factor into your estimate calculation. "Oh, we didn't get it all done, but that was because there were more bugs than expected." Well, was it? You just can't tell, really.
The worst thing that can happen is that you constantly add and remove things from the sprint in progress, so you never really know whether your estimates are any good. When the deadline comes around you find that you have loads of the originally planned features still to do.
It's much easier to leave all those bugs until the end and have a limited, known set of bugs to clear up. At least you will only overrun by a known amount.
If you are worried about the delay between finding and fixing a bug (and it can be a long one), the best approach is to do shorter sprints. A bug found during one sprint usually gets scheduled into the next one and only ships when that sprint ends, so a 2 week sprint means up to 4 weeks before a bug fix is released, while a 1 week sprint means 2 weeks. It's a big difference in turnaround.
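If it helps, the turnaround maths from that last paragraph looks like this, assuming the worst case of a bug found right at the start of a sprint:

```
from datetime import timedelta

# Worst case: a bug reported during one sprint is fixed in the next sprint
# and only ships when that sprint ends, i.e. roughly two sprint lengths later.
for sprint_weeks in (1, 2):
    worst_case = 2 * timedelta(weeks=sprint_weeks)
    print(f"{sprint_weeks}-week sprints: up to {worst_case.days // 7} weeks from bug report to release")
```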