Currently, my thinking is that GET requests would be feasible via screen scraping: a cron job runs at a set interval, scrapes data from the GUI, and syncs it to my own database.
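To make that read path concrete, here's a rough sketch of what I have in mind (Python with requests/BeautifulSoup mirroring into SQLite; the URL, CSS selectors, and schema are all placeholders, since I haven't built anything yet):

```python
# sync_job.py - minimal sketch of the cron-driven read path.
# Assumptions (all hypothetical): the GUI is a web app at GUI_URL, the data
# of interest is rendered as rows of an HTML table, and we mirror it into a
# local SQLite table that our own GET endpoints read from.
import sqlite3

import requests
from bs4 import BeautifulSoup

GUI_URL = "https://legacy-gui.example.com/records"   # placeholder
DB_PATH = "mirror.db"

def scrape_records():
    """Fetch the GUI page and parse each table row into an (id, name, value) tuple."""
    html = requests.get(GUI_URL, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for row in soup.select("table#records tbody tr"):   # selector is a guess
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        if len(cells) >= 3:
            yield cells[0], cells[1], cells[2]

def sync(records):
    """Upsert scraped rows into the local mirror so reads never hit the GUI."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS records (id TEXT PRIMARY KEY, name TEXT, value TEXT)"
    )
    conn.executemany(
        "INSERT INTO records (id, name, value) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name, value = excluded.value",
        list(records),
    )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    # Run this from cron, e.g. every 15 minutes:
    # */15 * * * * /usr/bin/python3 /opt/sync/sync_job.py
    sync(scrape_records())
```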
However, I'm not sure how I would handle actions that mutate the database behind the GUI. I'm fairly certain I would need to go through the GUI itself, but what tools are available for automating this by programmatically driving the GUI?
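For illustration, assuming the GUI is a web application, a browser-automation tool like Selenium is the sort of thing I'm imagining for the write path; here's a rough sketch (every element id, URL, and credential below is a placeholder):

```python
# create_record.py - minimal sketch of the write path via browser automation.
# Assumptions (all hypothetical): the GUI is a web app with a login form and a
# "new record" form; every element id, URL, and credential is a placeholder.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

GUI_URL = "https://legacy-gui.example.com"   # placeholder

def create_record(name: str, value: str) -> None:
    driver = webdriver.Chrome()   # or Firefox; would run headless on a server
    try:
        # Log in through the same form a human would use.
        driver.get(f"{GUI_URL}/login")
        driver.find_element(By.ID, "username").send_keys("api-bot")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()

        # Fill in and submit the "new record" form.
        driver.get(f"{GUI_URL}/records/new")
        driver.find_element(By.ID, "name").send_keys(name)
        driver.find_element(By.ID, "value").send_keys(value)
        driver.find_element(By.ID, "save").click()

        # Wait for the GUI's confirmation message before declaring success.
        WebDriverWait(driver, 30).until(
            EC.presence_of_element_located((By.CLASS_NAME, "success-banner"))
        )
    finally:
        driver.quit()
```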
Also, since an architecture like this is far from conventional, I'm curious what strategies could be used to scale such a system.
Note: It is acceptable for data returned from a GET request to be as stale as the cron job interval, and for POSTs, PUTs, and the like to complete at some point in the future, say within half an hour.
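Given that latency tolerance, my instinct is that mutations would land in a queue and a worker would replay them through the GUI one at a time; here's a rough sketch (the table name, payload shape, and the create_record helper from the sketch above are all hypothetical):

```python
# mutation_queue.py - sketch of the asynchronous write pipeline.
# Assumptions (all hypothetical): our own API accepts the POST, inserts a row
# into pending_mutations, and returns 202 Accepted; this worker drains the
# queue and replays each mutation through the GUI automation shown above.
import json
import sqlite3
import time

from create_record import create_record   # hypothetical module from the sketch above

DB_PATH = "mirror.db"

def next_pending(conn):
    """Return the oldest pending mutation, or None if the queue is empty."""
    return conn.execute(
        "SELECT id, payload FROM pending_mutations "
        "WHERE status = 'pending' ORDER BY id LIMIT 1"
    ).fetchone()

def worker_loop():
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS pending_mutations "
        "(id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT, status TEXT DEFAULT 'pending')"
    )
    while True:
        row = next_pending(conn)
        if row is None:
            time.sleep(10)   # nothing to do; the half-hour latency budget is generous
            continue
        mutation_id, payload = row
        data = json.loads(payload)
        create_record(data["name"], data["value"])   # replay through the GUI
        conn.execute(
            "UPDATE pending_mutations SET status = 'done' WHERE id = ?", (mutation_id,)
        )
        conn.commit()

if __name__ == "__main__":
    worker_loop()
```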
Note: Maybe my train of thought is completely idiotic and there's a better angle. I'd love to know.