It depends.
First, if you have a really big database spanning multiple machines, then everything (not just schema updates) is going to be painful, no matter how much you planned ahead of time.
Second, updating a database is NOT just a database thing -- it also depends on the larger system of which the DB is a part, including how the database is deployed (many database servers, multiple data centers, master-slave setups, etc.).
The pain can be eased by architecting your system components so that they all have some 'cognizance' of DB schema change events. This means the entire system should tolerate schema changes and respond to them in a 'sane' way, as sketched below.
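For example, here is a minimal Python sketch (using mysql-connector-python; the `users` table, `email` column, and an already-open cursor are illustrative assumptions) of what that tolerance can look like at the application layer:

```python
import mysql.connector

def fetch_user_email(cur, user_id):
    # Name columns explicitly instead of SELECT *, so adding a column
    # never silently changes the shape of existing query results.
    # During a rolling migration the `email` column may not exist on
    # every replica yet; treat its absence as "no email" instead of crashing.
    try:
        cur.execute("SELECT email FROM users WHERE id = %s", (user_id,))
        row = cur.fetchone()
        return row[0] if row else None
    except mysql.connector.errors.ProgrammingError:
        # Unknown column -- this replica hasn't received the change yet.
        return None
```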
You can check out the Online Schema Change (OSC) utility developed by Facebook for tackling MySQL schema updates without long table locks.
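The core trick such tools use is a trigger-based copy-and-swap rather than a blocking `ALTER TABLE`. Below is a rough sketch of the idea only, not the tool's actual API -- table and column names are made up, only the INSERT trigger is shown, and real tools also mirror UPDATE/DELETE and backfill in small chunks:

```python
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root",
                               password="secret", database="app")
cur = conn.cursor()

# 1. Build a shadow table with the new schema.
cur.execute("CREATE TABLE users_new LIKE users")
cur.execute("ALTER TABLE users_new ADD COLUMN email VARCHAR(255)")

# 2. Mirror live writes into the shadow table while the backfill runs.
cur.execute("""CREATE TRIGGER users_ins AFTER INSERT ON users
               FOR EACH ROW INSERT INTO users_new (id, name)
               VALUES (NEW.id, NEW.name)""")

# 3. Backfill existing rows (real tools do this in small chunks).
cur.execute("INSERT IGNORE INTO users_new (id, name) "
            "SELECT id, name FROM users")

# 4. Atomic swap; the old table sticks around as a safety net.
cur.execute("RENAME TABLE users TO users_old, users_new TO users")
```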
Also, there are standard best practices like turning your master read-only, making changes on the slaves or on a development copy first, etc.
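For instance, the read-only toggle is a one-liner that blocks writes from non-privileged clients while you work on the replicas (a sketch; assumes a connection with SUPER privilege):

```python
cur.execute("SET GLOBAL read_only = ON")   # freeze writes on the master
# ... apply and verify the schema change on each slave ...
cur.execute("SET GLOBAL read_only = OFF")  # resume normal operation
```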
In any case, having a full backup and an extensive test suite is a MUST. Only then can you make changes confidently and safely.