tl;dr - I want to stream text data from one writer process to multiple reader processes. I'm thinking of using a file to achieve this. Is it a good idea?
Using a file would avoid having to maintain client connections in the server and send data on each of those connections. The file would have a single writer, which only ever appends, and multiple readers. No data should be lost.
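Here is a rough sketch of the writer side as I imagine it (Python just for illustration; the file path, record format, and `ErrorWriter` name are all made up):

```python
import json

ERRORS_FILE = "/tmp/typecheck-errors.log"  # hypothetical path


class ErrorWriter:
    """The single writer: appends one newline-delimited JSON record per error."""

    def __init__(self, path: str = ERRORS_FILE) -> None:
        # "a" opens in append mode and creates the file if it doesn't exist.
        self._f = open(path, "a", encoding="utf-8")

    def append(self, error: dict) -> None:
        self._f.write(json.dumps(error) + "\n")
        self._f.flush()  # hand the record to the OS so readers can see it promptly

    def close(self) -> None:
        self._f.close()
```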
I have multiple questions:
- In general, is this a good idea?
- If not, what do you recommend for one-to-many IPC?
- Do I need file locking if the writer is only ever appending? (See the reader sketch after this list.)
- What about performance? I assume the OS caches the file in memory anyway, so reads and writes shouldn't be that bad. Or do you recommend using an in-memory file?
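On the locking question: my understanding is that with a single appending writer and read-only readers, no locking is needed for correctness, since bytes never change once written. The one thing a reader has to handle is catching the writer mid-record, so it should only act on complete, newline-terminated lines. A minimal sketch of such a reader (hypothetical `follow` helper, polling instead of inotify to keep it short):

```python
import time


def follow(path: str):
    """Yield complete lines from `path` as they are appended, tail -f style."""
    buf = ""
    with open(path, "r", encoding="utf-8") as f:
        while True:
            chunk = f.readline()
            if not chunk:              # nothing new yet
                time.sleep(0.1)        # simple polling; inotify/kqueue would avoid this
                continue
            buf += chunk
            if buf.endswith("\n"):     # only hand out complete records,
                yield buf[:-1]         # in case a read caught the writer mid-line
                buf = ""
```

A client would then just iterate over `follow(ERRORS_FILE)` and print each error as it arrives.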
Here's the more detailed context: I have a typechecker that runs as a daemon process and rechecks the codebase whenever it detects changes. To obtain the errors, the user invokes a client process on the command line that talks to the server. Currently, the client has to wait until the server has finished typechecking everything before it receives any errors; the errors are passed to clients via a monitor process which maintains the client connections.

I would like to change that and make errors available to clients as soon as they are found. Streaming errors through the monitor is difficult with the current code, and I think the file solution is simpler anyway. Whenever the server detects a change, it would unlink the file (clients currently reading it could safely finish doing so, since Unix keeps an unlinked file around until the last open descriptor to it is closed) and create a fresh one.
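For the unlink-and-recreate step, a reader also needs some way to notice that the path now points to a new file while it still holds the old one open. Comparing the inode of its open descriptor with the inode currently at the path is one way to do that; a sketch under the same assumptions as above:

```python
import os

ERRORS_FILE = "/tmp/typecheck-errors.log"  # hypothetical path


def start_new_errors_file(path: str = ERRORS_FILE):
    """Server side, on detecting a change: drop the old file and start a fresh one.

    Readers that still have the old file open can finish reading it; Unix only
    reclaims the unlinked inode once the last descriptor to it is closed.
    """
    try:
        os.unlink(path)
    except FileNotFoundError:
        pass
    return open(path, "a", encoding="utf-8")


def file_was_replaced(f) -> bool:
    """Reader side: has the path been unlinked/recreated since `f` was opened?"""
    try:
        return os.fstat(f.fileno()).st_ino != os.stat(f.name).st_ino
    except FileNotFoundError:
        return True  # old file unlinked, new one not created yet
```

A reader that sees `file_was_replaced` return True would drain what's left of its current file, close it, and reopen the path.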