I think there are two main reasons.
The first reason is that the cost of developing and prototyping a microcontroller is huge -- so much so that the low-end members of a product line are usually the same silicon as the high-end models, just with certain features disabled or left out of the datasheet. Most of the time it's cheaper to make one chip that targets many markets than many chips that each target a small market.
The second reason is that microcontrollers are designed for control, not processing. Usually the thing you're controlling will have some analog sensing capability and some analog control mechanism. The sensors can connect directly to the ADCs, or, if they have built-in electronics, communicate over SPI or I2C. The control mechanism can be driven by a DAC or a PWM output. A popular paradigm in control systems, distributed control, involves many small control systems communicating with each other; that's where a longer-range protocol like CAN comes in handy. Finally, timers and external interrupts are useful for all kinds of tasks, and even general-purpose computers rely on them heavily.
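To make that concrete, here's a minimal sketch of the kind of loop a microcontroller spends its life running: sample a sensor on an ADC channel, compute a correction, and update a PWM duty cycle. The HAL calls (`adc_read`, `pwm_set_duty`, `timer_wait_tick`) are hypothetical placeholders, since every vendor names these differently.

```c
/* A proportional-only control loop sketch. adc_read(), pwm_set_duty()
   and timer_wait_tick() are hypothetical HAL calls -- every vendor
   names these differently. */
#include <stdint.h>

uint16_t adc_read(uint8_t channel);    /* 12-bit sample, 0..4095 */
void     pwm_set_duty(uint16_t duty);  /* 0..4095 maps to 0..100% */
void     timer_wait_tick(void);        /* block until the next timer tick */

#define SETPOINT 2048  /* target sensor reading (mid-scale) */
#define KP       4     /* proportional gain */

int main(void)
{
    for (;;) {
        timer_wait_tick();  /* a timer paces the loop at a fixed rate */
        int32_t error = SETPOINT - (int32_t)adc_read(0);
        int32_t duty  = 2048 + error * KP;
        if (duty < 0)    duty = 0;     /* clamp to the PWM range */
        if (duty > 4095) duty = 4095;
        pwm_set_duty((uint16_t)duty);
    }
}
```

Every piece of that loop maps onto a built-in peripheral: an ADC does the sensing, a timer paces the loop, and a PWM channel does the actuating, with no external chips involved.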
External components would cost more, take up more board space, and be less convenient from a logistics standpoint. You would also have to worry about compatibility and reliability; system integration is not trivial. Emulating these functions in software (a.k.a. "bit-banging") is usually not feasible, since it would require a much faster processor, especially if you want to use more than one function at once -- and that means more cost and more power consumption. Imagine trying to create a 1 MHz PWM function on a 50 MHz CPU, for instance: that's only 50 cycles per PWM period to begin with, and after the overhead of the timer interrupt and the conditional branches, you'd be lucky to have more than a couple dozen cycles left over to do all the rest of the work!
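Here's roughly what that bit-banged PWM would look like, just to show where the cycles go. The GPIO register address and the ISR hookup are made up for illustration; the arithmetic in the comments is the point.

```c
#include <stdint.h>

/* Hypothetical memory-mapped GPIO output register -- the address
   is made up for illustration. */
#define GPIO_OUT  (*(volatile uint32_t *)0x40020014u)
#define PIN_MASK  (1u << 3)

#define PERIOD 10u                 /* timer ticks per PWM cycle */
static volatile uint8_t duty = 5;  /* 0..PERIOD, i.e. 10% steps only */
static uint8_t tick;

/* For 1 MHz PWM with even this coarse 10% resolution, the timer must
   fire at 10 MHz. On a 50 MHz CPU that's one interrupt every 5 cycles;
   ISR entry/exit alone costs more than that on most cores, so the
   scheme collapses before the loop body even runs. A hardware PWM
   peripheral does all of this with zero CPU cycles. */
void timer_isr(void)
{
    if (++tick >= PERIOD)
        tick = 0;
    if (tick < duty)
        GPIO_OUT |= PIN_MASK;      /* drive the pin high */
    else
        GPIO_OUT &= ~PIN_MASK;     /* drive the pin low */
}
```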
If you want a chip that's more focused on processing power, try a DSP or an actual microprocessor.