Grids are not synchronised by setting the generators to a specific phase; they are self-synchronised, because each generator can only run within a small phase difference of its local connection to the grid.
A generator which is powered enough to overcome its own losses will settle down to be exactly in phase with its local connection. If its driving power is now increased, it will speed up for a moment, advancing its phase, and exporting power into the grid. It will settle down to be slightly in advance of its local connection, where the power it exports into the grid is equal to its input power.
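This behaviour falls out of the classic lossless power-angle relation. The sketch below (my own notation and per-unit values, not figures from the question) uses P = E·V·sin(δ)/X to find the small phase lead δ at which the exported power balances the driving power.

```
import math

# Minimal sketch, assuming a lossless coupling of reactance X between the
# generator EMF E and the local grid voltage V (all values per unit, mine).
E = 1.0          # generator internal voltage (assumed)
V = 1.0          # local grid voltage (assumed)
X = 0.5          # coupling reactance (assumed)
P_max = E * V / X

def exported_power(delta_deg):
    """Power pushed into the grid for a given phase lead, in degrees."""
    return P_max * math.sin(math.radians(delta_deg))

def equilibrium_angle(P_input):
    """Phase lead at which exported power balances the mechanical input."""
    return math.degrees(math.asin(P_input / P_max))

# At zero net input the generator sits exactly in phase (delta = 0).
# Increase the driving power and it settles slightly ahead of the grid.
for P in (0.0, 0.5, 1.0, 1.5):
    print(f"P_in = {P:.1f} pu -> settles at delta = {equilibrium_angle(P):5.1f} deg")
```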
The phase across the grid will be determined by power flows as well as distance. A normal grid will have power being injected at multiple discrete points, and withdrawn (to a first approximation) more or less uniformly over the distribution area.
In your simple illustration, if plant B were supplying both loads and plant A then came on stream, plant A would initially synchronise to the 1000km-delayed version of B's phase. Once A started supplying significant power, however, the A-L1 phase shift would be modified by the A-L1 power flow.
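For a sense of scale, here is a rough back-of-envelope sketch of the phase lag plant A would initially lock to; the 50 Hz frequency and near-speed-of-light propagation are my assumptions, not figures from the question (real lines propagate somewhat slower, which only increases the angle).

```
# Rough arithmetic sketch, assumptions mine: phase of B's waveform as seen
# 1000 km away, with the wave travelling at roughly the speed of light.
distance_m = 1_000_000      # 1000 km, from the illustration
c = 3.0e8                   # assumed propagation speed, m/s
f = 50.0                    # assumed 50 Hz system

delay_s = distance_m / c                 # ~3.3 ms one-way delay
phase_deg = 360.0 * f * delay_s          # electrical degrees of lag

print(f"delay = {delay_s*1e3:.2f} ms, phase lag = {phase_deg:.0f} degrees")
# ~60 degrees: A initially locks to this delayed phase, and the A-L1
# power flow then shifts it further once A starts exporting power.
```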
Because it's power flow that controls phase shift, problems arise when a grid is weakened by connections going offline in an unplanned way, say due to storm damage. A connection can only carry a limited amount of power, so it can only hold a limited phase difference across it. If the remaining connections between two regions of the grid cannot support the power flow required to keep them synchronised, those connections may trip out and the regions become 'islanded'. With the self-synchronisation lost, it can take a long time to re-establish the grid connections.
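A hedged sketch of that islanding mechanism, reusing the same power-angle relation as above with illustrative per-unit values of my own: a tie line can carry at most E·V/X (reached at a 90° angle), so losing one of two parallel lines can leave the survivor with no stable operating angle at all.

```
import math

# Sketch, assumed values: does a tie of reactance X have a stable angle
# for the power it must carry? If not, protection trips and the regions island.
def tie_can_hold(P_required, E=1.0, V=1.0, X=0.5):
    P_max = E * V / X              # ceiling on transferable power (at 90 deg)
    if P_required > P_max:
        return None                # no stable angle exists: connection trips
    return math.degrees(math.asin(P_required / P_max))

# Two parallel lines share the inter-region flow; lose one and the
# survivor must carry it all.
P_between_regions = 3.0                              # pu, illustrative
print(tie_can_hold(P_between_regions / 2, X=0.5))    # both lines in: ~48.6 deg each
print(tie_can_hold(P_between_regions, X=0.5))        # one line left: None -> islands
```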