I am planning a large overhaul of a complex simulation system that runs several instances of several vehicle models in a classroom training environment. For example, 24 students may be running simulations of three different vehicles for maintenance and operation training. Instructors will need tablets that can connect to any of the 24 active simulations to control the training scenario.
The primary system will run on Linux, but there are no other OS requirements, and machine specs can be chosen as needed. Each simulation pass must run consistently at ~10 ms intervals with a ±2 ms tolerance.
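To make the timing requirement concrete, the per-tick loop I have in mind is roughly the following (a minimal sketch; `step_simulation` is a placeholder for one simulation pass):

```cpp
#include <ctime>

// Placeholder for one simulation pass of a single vehicle instance.
void step_simulation() { /* vehicle model update goes here */ }

int main() {
    constexpr long kPeriodNs = 10'000'000;  // 10 ms target interval

    timespec next{};
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        step_simulation();

        // Advance an absolute deadline by one period and sleep until it,
        // so scheduling jitter does not accumulate across ticks.
        next.tv_nsec += kPeriodNs;
        if (next.tv_nsec >= 1'000'000'000) {
            next.tv_nsec -= 1'000'000'000;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
    }
}
```

With a real-time scheduling class and a dedicated core I expect ±2 ms is achievable, but this is exactly the part I have no experience measuring.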
A primary goal is to make this system very modular so that it can be extended and reused by other training facilities with unique vehicles and needs.
My thought was to use a layered architecture (system, business, UI). The definition of each vehicle model would be stored in a database so that a superuser can edit it independently (modularity/extensibility of the vehicles). Each layer would then read this database at startup to dynamically allocate whatever resources that particular layer requires.
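Concretely, I picture each layer doing something like this at startup (a sketch; the `SignalDef` fields and the idea of a per-signal table are my own placeholders, not an existing schema):

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical row from a per-vehicle signal table: one entry per
// simulated value (gauge, switch, fault flag, ...).
struct SignalDef {
    std::string name;    // e.g. "oil_pressure"
    std::size_t offset;  // byte offset within the vehicle's state block
    std::size_t size;    // encoded size in bytes
};

// A layer sizes its state buffer from the definitions alone, so a new
// vehicle model needs only new database rows, not new code.
std::vector<std::byte> allocate_state(const std::vector<SignalDef>& defs) {
    std::size_t total = 0;
    for (const auto& d : defs) {
        total = std::max(total, d.offset + d.size);
    }
    return std::vector<std::byte>(total);
}
```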
Originally I planned to use shared memory for the system layer, with permissions and authentication governing which business-layer processes can log in. The primary simulation business logic would then continually update the vehicle state according to the active data. The instructor interface would have a business-layer server that connects to all 24 clients and also logs into the system layer to modify simulation parameters. Student inputs and visual outputs would each have their own business layer that logs into the shared-memory system layer as well. All of these would be separate applications, so they can be added, removed, or extended as needed.
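The "login" I describe is really just POSIX shared memory with file permissions standing in for authentication, roughly like this (the segment name and size are made up):

```cpp
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    // The system layer creates the segment; mode 0660 restricts access
    // to a dedicated group, which is the crude "authentication" I mean.
    int fd = shm_open("/sim_vehicle_00", O_CREAT | O_RDWR, 0660);
    if (fd < 0) { perror("shm_open"); return 1; }

    const size_t kSize = 1 << 20;  // placeholder size for one state block
    if (ftruncate(fd, kSize) != 0) { perror("ftruncate"); return 1; }

    void* base = mmap(nullptr, kSize, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    // Business-layer processes shm_open the same name (without O_CREAT)
    // and mmap it; the kernel enforces the permission bits at open time.

    munmap(base, kSize);
    close(fd);
    return 0;
}
```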
The problem came when I realized that classes do not work well in shared memory: I would need to serialize every get/set into a flat memory structure. Having not worked with this architecture before, I am unsure whether this plan will create a large performance hit. Technically, I can dedicate some cores to the primary business-layer logic that performs the simulation.
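By "flat" I mean something like the following: a trivially copyable struct with fixed-size members whose bytes mean the same thing in every process mapping the segment (the field names are illustrative only):

```cpp
#include <cstdint>
#include <cstring>
#include <type_traits>

// Flat, fixed-layout state block: no pointers or heap-owning members,
// so the same bytes are meaningful in every process mapping the segment.
struct VehicleState {
    std::uint32_t sequence;        // bumped by the writer each tick
    double engine_rpm;
    double oil_pressure;
    std::uint8_t fault_flags[32];
};

static_assert(std::is_trivially_copyable_v<VehicleState>,
              "must stay pointer-free to be shared across processes");

// Writer side: one memcpy per tick from the working copy into the
// mapping. A seqlock or process-shared mutex would still have to guard
// the copy; that synchronization is omitted here.
void publish(VehicleState* shared, const VehicleState& local) {
    std::memcpy(shared, &local, sizeof(VehicleState));
}
```

If the per-tick cost is essentially one memcpy of a few kilobytes, I suspect the hit is small, but I have not measured it.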
Would using shared memory with a suite of applications be an appropriate way to build this system? Would another inter-process communication mechanism, such as pipes, be more advisable than shared memory? Would it be better to keep the system and business logic in a single application and simply use mutexes and threads to ensure performance? Am I going about this all wrong?