CRT interlacing was done to get the best balance between phosphor decay rate and refresh rate. Each phosphor dot has, in effect, an intensity half-life that determines how quickly its glow fades.
Without interlacing, the half-life would have to be on the order of 1/25 second (Europe), which would produce noticeable flicker, as that is right at the edge of human flicker detection. In addition, the longer decay time required would cause blur on picture motion. By interlacing in the way we do, each zone of the screen is updated every 1/50 second. This reduces the flicker and allows a shorter-decay phosphor to be used, which in turn reduces the motion blur.
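As a rough way to see those numbers, here is a small Python sketch of the trade-off. It assumes simple exponential (half-life) decay and uses an illustrative half-life of 1/60 second, which is a made-up figure rather than a measured phosphor constant; the point is only to compare how far brightness falls between 1/25 s and 1/50 s refreshes.

```python
def brightness_at_refresh(refresh_interval_s: float, half_life_s: float) -> float:
    """Fraction of peak brightness remaining just before the next refresh,
    assuming simple exponential (half-life) decay."""
    return 0.5 ** (refresh_interval_s / half_life_s)

HALF_LIFE = 1 / 60  # hypothetical phosphor half-life in seconds, for illustration only

# Progressive 25 Hz frame: each spot waits 1/25 s between refreshes.
progressive = brightness_at_refresh(1 / 25, HALF_LIFE)

# Interlaced 50 Hz fields: every zone of the screen is revisited every 1/50 s.
interlaced = brightness_at_refresh(1 / 50, HALF_LIFE)

print(f"Brightness just before refresh at 25 Hz progressive: {progressive:.2f}")
print(f"Brightness just before refresh at 50 Hz interlaced:  {interlaced:.2f}")
# The smaller dip between refreshes is what the eye perceives as reduced flicker,
# and it lets a faster-decaying phosphor be used, which reduces motion blur.
```

With these assumed numbers the brightness sags to roughly 19% of peak between 25 Hz refreshes but only to about 44% between 50 Hz refreshes, which is the flicker reduction the answer describes.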
Doing what you suggest would result in the picture appearing to wash up and down the screen, with the intensity alternating between high and low at the top and bottom and staying reasonably even in the middle. Non-interlaced scanning would probably be better and less trouble.
Wikipedia's Interlaced Video states:
Interlaced video (also known as Interlaced scan) is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured at two different times. This enhances motion perception to the viewer, and reduces flicker by taking advantage of the phi phenomenon.
The original designers got it right when they interlaced the way they did.
Bonus:
See How a TV Works in Slow Motion by The Slow Mo Guys for some superb analysis.