I'm designing a custom web-based inventory management and workflow system for a client of mine. One of their remote sites has fairly sketchy internet access, and needs little or no interruption to business when its connection to the universe goes down.
I'm picturing some sort of local installation of the web app with MySQL replication keeping their local DB up to date. When their connection out of the building fails, they can manually kick over to the local URL, which is configured to hit their local MySQL, and they stay in business.
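For the replication piece, I'm picturing nothing fancier than the standard setup: the local box runs as an ordinary replica of the head-office master during normal operation. Roughly like this (host, user, password, and log coordinates below are all placeholders, not real values):

```sql
-- On the remote site's local MySQL instance: point it at the
-- head-office master as a plain read replica.
CHANGE MASTER TO
  MASTER_HOST = 'db.headoffice.example.com',  -- hypothetical host
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '********',
  MASTER_LOG_FILE = 'mysql-bin.000001',       -- from SHOW MASTER STATUS
  MASTER_LOG_POS = 4;
START SLAVE;
```

That keeps the local copy current, so the failover would just be a matter of the app writing to the local instance instead of the remote one.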
The question is how to replicate those changes back up to the real master afterward. I know there's such a thing as master-master replication, but will that really hot-sync divergent tables back together when the connection comes back up?
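For what it's worth, the usual master-master recipe I've seen staggers auto-increment values so the two sides can't hand out colliding primary keys while they're disconnected. Something like this in each server's my.cnf (server IDs and values are illustrative):

```ini
# Head office my.cnf -- illustrative values
[mysqld]
server-id                = 1
log_bin                  = mysql-bin
auto_increment_increment = 2   # two masters in the ring
auto_increment_offset    = 1   # this server hands out odd IDs

# Remote site my.cnf
[mysqld]
server-id                = 2
log_bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2   # this server hands out even IDs
```

But as I understand it, that only avoids key collisions on inserts; replication just replays statements in order, and it won't reconcile rows that both sides updated while apart, which is exactly the divergence I'm worried about.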
It helps that the remote site's use case is fairly distinct, and it's likely (though not guaranteed) to write only to tables related to their wing of the business. I could perhaps limit the "failover mode" local application to only the pieces of the app that are unique to their location, which would be enough to keep them in business.
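If I go that route, I could presumably even enforce it on the replication channel coming back, so the head office only accepts the remote site's own tables. Something like this in the head-office my.cnf, if it were configured as a replica of the remote site (database and table names are made up):

```ini
# On the head-office server, filtering what it will replay
# from the remote site's binlog -- hypothetical table names
replicate-do-table = inventory.site_b_orders
replicate-do-table = inventory.site_b_shipments
```

That would at least bound the damage if something outside their wing got written during an outage.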
How would you approach this?