The application we have in mind is a relay server. The aim is to accept large numbers of incoming socket connections (at least thousands) that will stay open for long periods (hours, perhaps days). The connections will exchange modest quantities of data and need reasonably low latency.
The technical design is straightforward and we have a test implementation. Our preference is to use Windows hosting and .NET, because that is the technology we know. However, this kind of usage is well outside what we are familiar with.
The question is whether there are specific limits or constraints, inherent in or common to software that does this, that we should allow for in our design and/or test for before a roll-out.
I found this question (Handling large amounts of sockets) and this link (http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-1), which tend to suggest that our solution should work.
--
Commenters have suggested opening and closing ports, or using some kind of protocol, without suggesting how. The problem is that at any time a message might be relayed to an individual known destination, with the expectation that it be received quickly (within 1 second at most, preferably sooner). The destination is (in general) behind a firewall with unknown properties, so it cannot (in general) run a server or accept incoming connections; it can only make outgoing connections. We therefore think it needs a persistent outgoing connection in order to receive packets at any time without notice. Alternative suggestions would be of interest, although strictly off topic for this question.
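To make the intended pattern concrete, here is a minimal sketch of the idea (in Python for brevity; our actual implementation is .NET, and all names here are made up): the destination dials *out* to the relay, identifies itself, and holds the connection open, so the relay can push to it at any moment with no inbound connectivity required at the destination.

```python
import asyncio

class Relay:
    """Minimal relay sketch: each destination dials OUT to the relay and
    holds the connection open; the relay can then push to it at any time.
    Illustrative only -- not our actual implementation."""

    def __init__(self):
        self.clients = {}  # client id -> StreamWriter of its held connection

    async def handle(self, reader, writer):
        # First line from a connecting destination is its id; the
        # connection then stays open so messages can arrive without notice.
        client_id = (await reader.readline()).decode().strip()
        self.clients[client_id] = writer
        await reader.read()  # keep the handler (and the connection) alive

    async def push(self, client_id, message):
        # Relay a message to a known destination over its held connection.
        writer = self.clients[client_id]
        writer.write(message.encode() + b"\n")
        await writer.drain()

async def main():
    relay = Relay()
    server = await asyncio.start_server(relay.handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    # The destination is behind a firewall, so it can only dial out:
    # one long-lived outgoing connection, then it just reads pushes.
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"dest-1\n")
    await writer.drain()
    await asyncio.sleep(0.1)  # give the relay time to register the id

    await relay.push("dest-1", "hello")
    line = await asyncio.wait_for(reader.readline(), timeout=1.0)
    writer.close()
    server.close()
    return line.decode().strip()

result = asyncio.run(main())
print(result)
```

The point of the sketch is only the shape of the protocol: one persistent outbound connection per destination, with the relay holding a map from destination id to live connection.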
Other commenters have suggested there are OS limits, but have not specified any. Assuming some version of Windows Server and some (perhaps large) amount of memory, are those limits likely to be a problem? Roughly what will they be? Is Windows a bad idea?
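One thing we can do before roll-out, regardless of which limits apply, is probe empirically for whatever per-process ceiling we hit first (handles/descriptors, ephemeral ports for the client side, or memory). A crude sketch of such a probe (again Python for brevity, not our .NET code; `probe_connections` is a hypothetical helper):

```python
import socket

def probe_connections(n):
    """Crude pre-roll-out probe: open up to n loopback connections and
    report how many succeeded before the OS refused one (descriptor or
    handle limits, ephemeral port exhaustion, or memory).
    Hypothetical helper, not part of our implementation."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(128)
    port = listener.getsockname()[1]
    held, opened = [], 0
    try:
        for _ in range(n):
            c = socket.create_connection(("127.0.0.1", port))
            a, _ = listener.accept()  # hold BOTH ends open, like a real relay
            held += [c, a]
            opened += 1
    except OSError:
        pass  # hit some limit; 'opened' records how far we got
    finally:
        for s in held:
            s.close()
        listener.close()
    return opened

print(probe_connections(100))
```

Note that a loopback probe consumes an ephemeral port per connection on the client side, which a real server accepting inbound connections would not; so this over-counts one limit while exercising the others.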