The other thing you haven't considered is that files typically don't just exist on disk:
- They are copied across networks in various ways and under various circumstances.
- They are copied from one storage medium to another, or even within the same medium.
Each time a file is copied, the bits could get corrupted ...
Now some of these representation or data-movement schemes have (or can have) mechanisms to detect corruption. But this doesn't apply to all of them, and someone receiving a file cannot tell whether every storage or movement scheme that touched the file along the way did error detection. Nor do you know how good that error detection is. For example, will it detect 2 flipped bits?
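To make that concrete, here is a minimal sketch in Python. The additive checksum is a toy I made up for illustration, not any particular protocol's, but it shows the failure mode: two bit flips whose effects cancel leave the checksum unchanged, so the corruption goes undetected.

```python
def additive_checksum16(data: bytes) -> int:
    """Sum of all bytes modulo 2**16 -- a deliberately weak checksum."""
    return sum(data) % 65536

original = bytearray(b"hello, world")
corrupted = bytearray(original)
corrupted[0] ^= 0x01  # 'h' -> 'i': flipping bit 0 adds 1 to the byte sum
corrupted[1] ^= 0x01  # 'e' -> 'd': flipping bit 0 subtracts 1, cancelling it

assert bytes(corrupted) != bytes(original)
assert additive_checksum16(corrupted) == additive_checksum16(original)
print("two bits flipped, yet the checksums match")
```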
Therefore, if the file content warrants error detection, including error detection as part of the file format is a reasonable thing to do. (Indeed, if you don't, you ought to use some kind of external checksumming mechanism, independent of the file system's error detection and so on.)
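As a sketch of what "error detection as part of the file format" can look like, here is a hypothetical layout (my own, purely for illustration): a 4-byte CRC32 of the payload stored ahead of the payload, verified on every read. It uses only the Python standard library.

```python
import struct
import zlib

# Hypothetical format: 4-byte big-endian CRC32 of the payload, then the payload.

def write_with_checksum(path: str, payload: bytes) -> None:
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    with open(path, "wb") as f:
        f.write(struct.pack(">I", crc))
        f.write(payload)

def read_with_checksum(path: str) -> bytes:
    with open(path, "rb") as f:
        (stored_crc,) = struct.unpack(">I", f.read(4))
        payload = f.read()
    if zlib.crc32(payload) & 0xFFFFFFFF != stored_crc:
        raise ValueError("checksum mismatch: file corrupted in storage or transit")
    return payload
```

The point is that the check travels with the file itself, so it holds regardless of which disks, networks, or file systems the file passed through.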
Another thing to note is that while disks, networks, network protocols, file systems, RAM and so on often implement some kind of error detection, they don't always do this. And when they do, they tend to use a mechanism that is optimized for speed rather than for high integrity, because high integrity tends to be computationally expensive.
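A rough way to see that cost difference, using only the Python standard library (absolute numbers vary by machine; only the relative cost matters):

```python
import hashlib
import timeit
import zlib

data = bytes(10_000_000)  # 10 MB of zeros, just for a rough comparison

crc_time = timeit.timeit(lambda: zlib.crc32(data), number=10)
sha_time = timeit.timeit(lambda: hashlib.sha256(data).digest(), number=10)

print(f"CRC32  : {crc_time:.3f}s for 10 runs (fast, 32-bit, weak guarantees)")
print(f"SHA-256: {sha_time:.3f}s for 10 runs (slower, 256-bit, cryptographic)")
```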
A file format where integrity matters cannot assume that something else is taking care of the problem.
(Then there is the issue that you may want / need to detect deliberate tampering with the file. For that you need something more than simple checksums or even (just) cryptographic hashes, since anyone who can alter the file can simply recompute those and store the new values. You need something like digital signatures.)
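A sketch of that, assuming the third-party `cryptography` package (Ed25519 is just one convenient scheme; any signature algorithm would make the same point). The signature is distributed alongside the file, and forging it requires the private key.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

payload = b"file contents worth protecting"
signature = private_key.sign(payload)  # shipped alongside the file

try:
    public_key.verify(signature, payload)  # raises if either has changed
    print("signature valid")
except InvalidSignature:
    print("tampering detected")
```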
TL;DR - checksums in file formats are not redundant.