As is so often the case in computing, the answer is (a) historical circumstance and the need to maintain backwards compatibility, and (b) that some filesystems are better suited to some tasks than others.
On (a): remember that the "Winchester drive" - I am just about old enough to remember them being called that (the rest of the world calls it a 'hard drive') - has only been around for about half the history of electronic computing, and even then it was out of most users' reach for much of that time on cost grounds. The FAT filesystem worked well on floppy disks and on the original small hard drives because it was reasonably efficient and had low overhead. Once its use spread - and it spread widely because it is simple to implement - manufacturers could not tell users that their old data was suddenly invalid.
Similarly, for Linux users, say, a stable NTFS driver was a long time coming, so keeping devices formatted as FAT meant they could be read and written across multiple systems.
On (b): think of the difference between a system that stores billions of text-based database records and one that stores DVD-length media files. For the database each record could be very small - perhaps just 30 or 40 bytes - and a filesystem that allocates a whole 'segment' (however you want to define that) of disk per record is likely to waste a great deal of space. Not so with the DVDs - bigger 'segments' (within reason, obviously) are likely to be highly efficient in space terms.
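To make that concrete, here is a rough back-of-the-envelope sketch in Python (the record count, record size, and cluster sizes are made-up illustrative numbers, not taken from any real filesystem): any file is rounded up to a whole number of allocation units, so tiny records suffer enormously from big clusters while a huge media file barely notices.

```python
import math

def allocated_bytes(file_size: int, cluster_size: int) -> int:
    """Disk space actually consumed: file size rounded up to whole clusters."""
    return math.ceil(file_size / cluster_size) * cluster_size

# Hypothetical example: a billion 40-byte records, each stored as its own file.
record_size = 40
n_records = 1_000_000_000
for cluster in (4 * 1024, 64 * 1024):  # compare 4 KiB vs 64 KiB clusters
    used = n_records * allocated_bytes(record_size, cluster)
    data = n_records * record_size
    print(f"{cluster // 1024:>3} KiB clusters: "
          f"{used / 1024**4:.1f} TiB allocated for {data / 1024**3:.1f} GiB of data")

# A DVD-length file (~4.7 GB) wastes at most one cluster of slack, whatever
# the cluster size, so big clusters cost it almost nothing.
dvd = 4_700_000_000
for cluster in (4 * 1024, 64 * 1024):
    slack = allocated_bytes(dvd, cluster) - dvd
    print(f"{cluster // 1024:>3} KiB clusters: {slack} bytes of slack on the DVD-size file")
```

The asymmetry is the whole point: for the small records, most of the allocated space is slack; for the media file, the slack is a vanishing fraction of its size.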
So different filesystems are designed for different purposes.