Memory devices at every level of the hierarchy (L1 cache, main memory, disk, and so on) offer sequential access as a faster mode than random access. Random access requires transmitting an address for each item, and the device has to reconfigure itself for every new address. Sequential access allows bulk transfers: one address is amortized across the transfer of a whole block, and the device internally advances to the next address in parallel with accessing the prior one.
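You can observe this even on main memory with a rough microbenchmark. This is just a sketch: the array size and the expectation that the random pass comes out several times slower are assumptions about typical hardware, not measurements.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 24)  /* 16M ints (~64 MB): larger than any CPU cache */

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    int *a = malloc((size_t)N * sizeof *a);
    size_t *idx = malloc((size_t)N * sizeof *idx);
    if (!a || !idx) return 1;

    /* Fill the array and precompute random indices (xorshift64),
       so index generation stays out of the timed loops. */
    unsigned long long r = 88172645463325252ull;
    for (size_t i = 0; i < N; i++) {
        a[i] = 1;
        r ^= r << 13; r ^= r >> 7; r ^= r << 17;
        idx[i] = (size_t)(r % N);
    }

    long sum = 0;
    double t0 = now();
    for (size_t i = 0; i < N; i++) sum += a[i];       /* sequential pass */
    double t1 = now();
    for (size_t i = 0; i < N; i++) sum += a[idx[i]];  /* random pass */
    double t2 = now();

    printf("sequential: %.3f s   random: %.3f s   (sum=%ld)\n",
           t1 - t0, t2 - t1, sum);
    free(a); free(idx);
    return 0;
}
```

Both loops read the same N integers; the only difference is whether each fetched cache line serves many subsequent accesses or just one.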
Disk takes this to an extreme, as @gnasher729 points out. Because of the physical medium, byte addressing isn't even practical. On disk, a fixed-size overhead surrounds whatever amount of content you store. During low-level formatting by the manufacturer, a choice is made of how much content constitutes one block. That choice is a trade-off between the advantage of sequential bulk transfers when you want more data and the disadvantage when you don't, as well as amortizing the fixed overhead over more content.
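To see why the amortization matters, here's a back-of-the-envelope calculation. The 65-byte per-sector framing figure below is a made-up ballpark, not from any datasheet; it stands in for the preamble, sync marks, ECC, and inter-sector gap that surround each sector's payload.

```c
#include <stdio.h>

int main(void) {
    const double overhead = 65.0;            /* assumed bytes of framing per sector */
    const double sizes[] = { 512.0, 4096.0 }; /* legacy vs. Advanced Format sectors */

    for (int i = 0; i < 2; i++) {
        double payload = sizes[i];
        double efficiency = payload / (payload + overhead);
        printf("%4.0f-byte sectors: %.1f%% of the surface holds data\n",
               payload, 100.0 * efficiency);
    }
    return 0;
}
```

With these assumed numbers, 512-byte sectors spend about 11% of the surface on framing while 4096-byte sectors spend under 2%, which is one reason drives moved to larger "Advanced Format" sectors.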
In practice, if the operating system is working with a given file, it will only do a read-modify-write when the page is cold; thereafter, for a time, it caches the latest content of that page. If the page is found in the cache, the initial read can be forgone.
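Here's a toy sketch of that write path. The "disk" and the cache are just in-memory arrays, not a real kernel's page cache, but the control flow is the point: a small write to a cold block forces a whole-block read first, while a write to a warm block skips it.

```c
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096
#define N_BLOCKS   8

static unsigned char disk[N_BLOCKS][BLOCK_SIZE];   /* fake device */
static unsigned char cache[N_BLOCKS][BLOCK_SIZE];  /* trivial cache */
static int cached[N_BLOCKS];    /* 1 if cache[b] currently holds block b */
static int dirty[N_BLOCKS];     /* 1 if block b must be written back */
static int device_reads;

static void device_read(int b) {
    memcpy(cache[b], disk[b], BLOCK_SIZE);
    cached[b] = 1;
    device_reads++;
}

static void write_bytes(int b, size_t off, const void *data, size_t len) {
    if (!cached[b])
        device_read(b);  /* cold: read-modify-write, so untouched bytes
                            survive the eventual whole-block write-back */
    memcpy(cache[b] + off, data, len);  /* warm: the read is forgone */
    dirty[b] = 1;        /* flushed to the device later, as one block */
}

int main(void) {
    write_bytes(3, 100, "hello", 5);  /* cold page: triggers a device read */
    write_bytes(3, 200, "world", 5);  /* warm page: cache hit, no read */
    printf("device reads: %d\n", device_reads);  /* prints 1 */
    return 0;
}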