You need to analyze your code looking for "turnarounds". A turnaround is a point where forward progress is blocked until data arrives from the other side. Each turnaround adds a performance penalty that scales with the latency: if you have 12 turnarounds, 100ms of added latency means 1.2 seconds of extra wait. If you can drop the turnarounds to 4, the same 100ms costs only an extra 0.4 seconds.
Some turnarounds are explicit. If you send a request and wait for a reply, that's a turnaround. If you need to see that reply to decide what the next request will be, that's a turnaround.
Some turnarounds are implicit. If you open a TCP connection and wait to determine if the opening succeeded, that's a turnaround. If you close a TCP connection and wait for the close to complete with no errors, that's a turnaround.
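To make the counting concrete, here's a minimal sketch of a naive client (the host, port, and line-oriented protocol are all hypothetical). Both kinds of turnaround show up: the implicit ones around opening and closing the connection, and an explicit one per request/reply exchange.

```python
import socket

HOST, PORT = "server.example.com", 7000          # hypothetical server

sock = socket.create_connection((HOST, PORT))    # turnaround 1: wait for the TCP handshake
for request in (b"GET a\n", b"GET b\n", b"GET c\n"):
    sock.sendall(request)
    reply = sock.recv(4096)                      # turnarounds 2-4: one wait per request
                                                 # (assumes each reply fits in one recv)
sock.shutdown(socket.SHUT_WR)                    # send our FIN...
sock.recv(4096)                                  # turnaround 5: wait for the peer to close cleanly
sock.close()
```

Five turnarounds for three requests: at 100ms of extra latency apiece, that's half a second of pure waiting.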
Finding and measuring turnarounds is a bit of a black art, but one trick you can use is a latency simulator (Linux can do this with netem). Insert a 4-second latency simulator (a device that adds 4 seconds of delay to every packet passing through it), and add logging with timestamps or some other way to tell what's going on. Each 4 seconds you spend waiting for something is a turnaround. You can count them, and you can also see where in your operating sequence they fall.
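As a rough sketch of that setup, assuming a Linux client box (the interface name and server address are placeholders): push 4 seconds of delay onto outgoing packets with netem, then timestamp each blocking call. Every call that stalls for about 4 seconds marks one turnaround.

```python
import socket
import time

# First, on the client machine (as root), delay every outgoing packet:
#   tc qdisc add dev eth0 root netem delay 4000ms
# and remove it again afterwards with:
#   tc qdisc del dev eth0 root netem

def timed(label, fn, *args):
    """Log how long a blocking call takes; ~4s stalls are turnarounds."""
    start = time.monotonic()
    result = fn(*args)
    print(f"{label}: {time.monotonic() - start:.1f}s")
    return result

sock = timed("connect", socket.create_connection, ("server.example.com", 7000))
timed("sendall", sock.sendall, b"GET a\n")   # sends rarely block; the stall lands in recv
timed("recv", sock.recv, 4096)
sock.close()
```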
Code that's never been optimized for turnarounds frequently has a large number of needless ones. They're typically invisible in testing environments, because those environments usually put the client and server on the same LAN or, at worst, very close together on a fast WAN.
There are four basic ways to eliminate a turnaround:

- Pipelining -- don't wait for a reply if you don't have to; send the next request immediately and come back to check results later (sketched below).
- Consolidation -- if some higher-level operation currently needs two requests and you reduce it to one, that's a turnaround gone.
- Concurrency -- if you can run more than one exchange at a time, their waits overlap instead of adding up.
- Prefetching -- if you know a web page you're about to load will need the contents of another URL, start retrieving that URL while the page that needs it is still loading.
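Here's a sketch of the pipelining option applied to the earlier naive client (same hypothetical server and line-oriented protocol, and it assumes the server answers requests in order). All three requests go out back-to-back, so the three reply waits collapse into roughly one.

```python
import socket

HOST, PORT = "server.example.com", 7000      # hypothetical server
requests = (b"GET a\n", b"GET b\n", b"GET c\n")

sock = socket.create_connection((HOST, PORT))
for request in requests:
    sock.sendall(request)                    # fire all requests without waiting

replies, buf = [], b""
while len(replies) < len(requests):          # one shared wait for all replies
    chunk = sock.recv(4096)
    if not chunk:
        break                                # peer closed early
    buf += chunk
    *complete, buf = buf.split(b"\n")        # keep any partial line in the buffer
    replies.extend(complete)
sock.close()
```

Compared with the five-turnaround version, this is down to roughly two (the handshake plus one shared reply wait), so 100ms of extra latency now costs about 0.2 seconds instead of 0.5.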