Very generally speaking -- the clock speed of the device directly drives its performance (operations per second). Chips are designed (and extensively verified) for correct operation at some frequency determined by the manufacturer. Increasing the clock speed (overclocking) will increase performance, but potentially at the risk of instability / incorrect operation, since the chip is now running outside of manufacturer parameters. Often this isn't an issue in practice: most users never happen to exercise the path in the chip that fails at the excess clock speed, or they got lucky with the specific die they received and it can in fact operate stably at a higher frequency (process variation from die to die is a very real thing).
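To make the failure mode concrete, here's a rough back-of-the-envelope sketch (all numbers are made up for illustration, not from any real chip): each clock cycle gives the logic a fixed time budget, and overclocking shrinks that budget until the slowest path no longer settles in time.

```python
# Rough illustration: the per-cycle timing budget shrinks as the clock goes up.
# The critical-path delay below is an assumed, made-up number.

def cycle_budget_ns(freq_ghz: float) -> float:
    """Clock period in nanoseconds -- the time logic has to settle each cycle."""
    return 1.0 / freq_ghz

critical_path_ns = 0.24  # assumed worst-case delay of the slowest logic path

for freq in (4.0, 4.4, 4.8):
    budget = cycle_budget_ns(freq)
    slack = budget - critical_path_ns
    status = "OK" if slack >= 0 else "fails if this path is exercised"
    print(f"{freq:.1f} GHz: period = {budget:.3f} ns, slack = {slack:+.3f} ns ({status})")
```

Negative slack only bites if that particular path actually gets exercised, and the real delay varies die to die -- which is exactly why some people "get away with" an overclock that would be unstable on another sample.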
Increasing the voltage of a device on its own won't cause it to operate "faster" in the sense that you'll see more operations per second. Increasing it in conjunction with an overclock may improve the stability of said overclock by letting the transistors switch faster (shorter rise/fall times, if I remember correctly) -- at the expense of increased heat dissipation, and potential damage to the chip if the voltage is raised too high (modern chips often hover close to 1V these days); even an extra 100-200mV can kill the chip / cause premature death (electromigration). By speeding up the transistors, you give the logic enough margin to settle within the shorter clock period, so the flip-flops still meet their setup and hold requirements at the higher clock rate.
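If it helps to see why the heat goes up so quickly, here's a minimal sketch using the standard CV²f approximation for dynamic switching power (capacitance assumed constant, all voltages/frequencies purely illustrative):

```python
# Dynamic switching power scales roughly as C * V^2 * f, so a voltage bump
# compounds with the higher clock. Numbers below are illustrative only.

def relative_power(v: float, f: float, v0: float = 1.0, f0: float = 4.0) -> float:
    """Power relative to stock (v0, f0), using the C*V^2*f approximation."""
    return (v / v0) ** 2 * (f / f0)

print(f"stock (1.0 V, 4.0 GHz):      {relative_power(1.0, 4.0):.2f}x")
print(f"overclock only (4.4 GHz):    {relative_power(1.0, 4.4):.2f}x")
print(f"overclock + 0.1 V:           {relative_power(1.1, 4.4):.2f}x")
print(f"overclock + 0.2 V:           {relative_power(1.2, 4.4):.2f}x")
```

The voltage term is squared, so a 10-20% bump on a ~1V rail adds noticeably more heat than the clock increase alone -- on top of the electromigration risk mentioned above.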
Summary -- most people over-voltage their devices to support an overclock.
Extra: On some occasions, increasing the voltage very slightly can help with stability at stock clocks, but you've arguably been sold a marginal device at that point. I have a habit (with honestly no real scientific backing) of bumping my DRAM voltage up by 50mV when I have 4 sticks installed -- for my small sample size, it seems to help stability-wise when overclocking the CPU (while keeping the DRAM at its stock frequency).