I can understand that a small group can split the job so that everyone works on their own task. But how does this work with a larger group, spread over several places?
Well, if you think about it, it works the same way that all distributed development projects work. The developers use tools (e.g. distributed version control) and procedures that are tried and tested for this kind of situation.
And the evidence is that this approach really does work ... if done properly.
One thing that works in favour of the "the code is the spec" approach for a file system is that a Linux file system only requires a single master implementation. From the developers' perspective, there is no need for multiple implementations of (say) BTRFS, and certainly no need for independent (e.g. clean-room) re-implementations. If you look at it from their point of view, there is no value to them in writing a spec, setting up a committee to manage changes to the spec, and constraining themselves to conforming to the spec.
jmoreno comments:
For something like a file system, I would expect a reference implementation and/or a test suite, which would be the documentation.
You could say that the master implementation is the reference implementation, and the master test suite is the reference test suite. The only issue is that the test suite will have been designed as a functionality test suite for the master (reference) implementation rather than as a compliance test suite for all possible implementations of a (hypothetical) spec.
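The distinction between a functionality test suite and a compliance test suite can be illustrated with a toy sketch. Everything here is invented for illustration (the `TinyFS` class is a stand-in, not any real file system API): the first test pins down incidental behaviour of one particular implementation, while the second checks only a property that any hypothetical spec would guarantee.

```python
class TinyFS:
    """A toy in-memory 'file system' standing in for a master implementation."""
    def __init__(self):
        self._files = {}

    def write(self, path, data):
        self._files[path] = data

    def read(self, path):
        return self._files[path]

    def list_dir(self):
        # Implementation detail: this toy happens to return paths sorted.
        return sorted(self._files)


def functionality_test():
    # Pinned to the master implementation's observed behaviour,
    # including the incidental sorted listing order.
    fs = TinyFS()
    fs.write("/b", b"two")
    fs.write("/a", b"one")
    assert fs.list_dir() == ["/a", "/b"]   # depends on sort order


def compliance_test(make_fs):
    # Checks only what a (hypothetical) spec would guarantee --
    # data written is data read back -- so any conforming
    # implementation passes, whatever its listing order.
    fs = make_fs()
    fs.write("/a", b"one")
    assert fs.read("/a") == b"one"         # spec-level property only


functionality_test()
compliance_test(TinyFS)
```

A second implementation that listed paths in insertion order would fail the functionality test but pass the compliance test, which is exactly why a test suite written against one master implementation does not automatically serve as a compliance suite for all implementations.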