28

I was studying the scanning of old CRT screens and the interlacing strategy for video, and I started wondering something.

The raster scan process went top to bottom on the odd lines, then back to the top to scan the even lines. A vertical blanking interval is therefore needed to send the electron beam back to the top position.

Why wasn't the initial design of the CRT vertical scan made so that the scan went top to bottom on the odd lines and bottom to top on the even lines, thus eliminating the need for vertical blanking? It would of course require the signal of the even lines to be reversed.

Stefano Borini
  • I'm not entirely sure your proposal would eliminate the need for vertical blanking (although maybe it'd reduce it?)... But perhaps someone who better understands how CRTs work can expand on that. – marcelm Aug 03 '19 at 09:37
  • excellent system-design question – analogsystemsrf Aug 03 '19 at 14:07
  • a similar idea, using triangular instead of sawtooth horizontal deflection would have the horizontal beam direction reversed every second scan line, eliminating the need for a horizontal flyback altogether. I guess the precision to get such a picture perfectly aligned was not possible back then. – dlatikay Aug 05 '19 at 15:09

6 Answers

37

CRT interlacing was done to get the best balance between phosphor decay rate and refresh rate. Each phosphor dot has, in effect, an intensity half-life which determines its decay rate.

Without interlacing the half-life would have to be on the order of 1/25 second (Europe), and this would give noticeable flicker as 25 Hz is at the edge of human flicker detection. In addition, the longer decay time required would cause blur on picture motion. By interlacing in the way we do, each zone of the screen is updated every 1/50 second. This reduces the flicker and allows a shorter-decay phosphor to be used, which in turn reduces the motion blur.

To do as you suggest would result in the picture washing up and down the screen, alternating between high and low intensity at the top and bottom, with reasonably even intensity only in the middle. Non-interlaced would probably be better and less trouble.
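
To make that concrete, here is a rough timing sketch (my own illustration in Python; it assumes a 50 Hz field rate and simply ignores the blanking intervals). It prints the two gaps between successive refreshes of a screen region under normal interlacing and under the proposed down/up scan: an even 20 ms everywhere in the first case, but very uneven gaps of roughly 4 ms and 36 ms near the top and bottom in the second.

    # Refresh-interval sketch: 50 Hz fields, blanking ignored.
    FIELD_MS = 20.0  # one field = 1/50 s

    def refresh_gaps(y, down_up=False):
        """Two successive refresh gaps (ms) for a screen region at
        fractional height y (0 = top, 1 = bottom)."""
        t_odd = y * FIELD_MS                           # odd field scans top -> bottom
        if down_up:
            t_even = FIELD_MS + (1.0 - y) * FIELD_MS   # even field scans bottom -> top
        else:
            t_even = FIELD_MS + y * FIELD_MS           # even field also scans top -> bottom
        frame_ms = 2 * FIELD_MS
        return (t_even - t_odd, t_odd + frame_ms - t_even)

    for y in (0.1, 0.5, 0.9):
        print(f"y={y:.1f}  standard {refresh_gaps(y)}   down/up {refresh_gaps(y, True)}")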


Wikipedia's Interlaced Video states:

Interlaced video (also known as Interlaced scan) is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured at two different times. This enhances motion perception to the viewer, and reduces flicker by taking advantage of the phi phenomenon.

The guys got it right when they interlaced it as they did.


Bonus:

See How a TV Works in Slow Motion by the Slow-Mo guys for some super analysis.

Transistor
  • Well, you would still have interlacing. Just that instead of going top -> bottom, top -> bottom, you would go top -> bottom, bottom -> top. The vertical signal would be a triangular wave instead of a sawtooth. – Stefano Borini Aug 03 '19 at 18:29
  • @StefanoBorini No, at the top and bottom of the scan the screen would be refreshed twice in quick succession, followed by a gap while everything else was refreshed twice. You would only get effective interlacing in the middle of the screen. – alephzero Aug 03 '19 at 18:42
  • @StefanoBorini: Read my third paragraph again. I tried to explain the problem. For an even intensity with no flicker you need to refresh each area of the screen at equal intervals. Your system doesn't do that. – Transistor Aug 03 '19 at 21:23
  • Interlacing as it was done only "got it right" if the signal was band-limited appropriately in the vertical direction, e.g. by the optics. As it was done, the aliasing was/is nauseating. – R.. GitHub STOP HELPING ICE Aug 04 '19 at 02:51
  • My initial thought on reading the question was that getting upward and downward passes to be properly aligned would be impractical, but you actually hit upon an even bigger problem. If alignment weren't an issue, horizontal scanning could be done bidirectionally to reduce the required horizontal blanking interval if one pulsed the vertical deflection voltage each line; getting the alignment to work there would be a killer. – supercat Aug 04 '19 at 21:27
  • It's just occurred to me that the designers were probably leaving room for the future Teletext service. – Transistor Aug 04 '19 at 21:32
  • The old (2006 and prior) CRT (3 tubes, red, green, blue, no masks) based rear projection HDTV, such as Mitsubishi 65815, support both 480p and 1080i as native modes. I'm wondering what the supported refresh rate is for 480p, and how this would affect the choice of phosphor persistence. These HDTVs also support 480i, and I don't know if they deinterlace 480i (using memory?). – rcgldr Aug 05 '19 at 06:40
  • @rcgldr: I think 480p uses essentially the same vertical timing as 480i, but doubles the horizontal frequency. If one were to take a 480i source, squish the timing of each scan line by 50%, and overlay the squished output from exactly 262.5 lines periods earlier, I think that would yield a 480p signal. – supercat Aug 05 '19 at 21:29
  • @supercat - 480p - the p stands for progressive, or non-interlaced. For NTSC, it can be either 29.97 hz or 59.94 hz, and I was wondering which was the more common one, and/or if the CRT rear projection systems would support both. If the supported rate is 29.97 hz, then it seems the phosphor decay rate would be based on 29.97 hz (for both 480p and 1080i since theyre' the same phosphors). – rcgldr Aug 06 '19 at 00:36
  • @Transistor very true. I got it now. Thanks – Stefano Borini Aug 06 '19 at 10:38
  • @Stefano: See the link I've added to the Slow Mo Guys. – Transistor Aug 06 '19 at 18:40
19

It's worse than Transistor suggests ... the scanning waveform was generated by simple analog circuitry and was a segment of an exponential waveform, not a perfectly linear sawtooth. So it would sag in the middle.

On a good TV it was reasonably linear, good enough for the errors not to be obvious. However, if the retrace also carried picture information, you would see double images in the central part of the screen: the sag would place the central line below the centre while scanning down, but above it while scanning up, and it would be rather obvious that the two copies weren't in the same place.
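
As a rough sketch of that sag (my own illustration in Python; the RC time constant is an arbitrary value, not one from a real set), model the vertical deflection as a segment of an exponential and see where the mid-picture line lands on a downward versus an upward scan:

    import math

    T = 1.0        # one field, normalised time
    tau = 2.0 * T  # assumed deflection RC time constant (illustrative only)

    def pos_exp(t):
        """Normalised beam position (0 = top, 1 = bottom) for an exponential ramp segment."""
        return (1 - math.exp(-t / tau)) / (1 - math.exp(-T / tau))

    mid = pos_exp(T / 2)  # beam position when the mid-picture line is being transmitted
    down, up = mid, 1 - mid
    print(f"mid-picture line lands at {down:.3f} scanning down, {up:.3f} scanning up")
    print(f"separation of the two copies: {down - up:.3f} of the screen height")

With these exaggerated numbers the two copies of the central line end up more than a tenth of the screen height apart; with the scan running in the same direction on every field, the same error distorts both fields identically and no double image appears.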

TV had to work with imperfect circuitry.

When colour came along, even under the ideal conditions of identical scanning circuits in the same direction, it was a big enough headache getting all the colours to line up correctly. Just mention a "convergence panel" to an old timer and watch him shudder. It was a circuit board packed full of interacting adjustments...

  • There's no reason to think that the waveform was exponential. Deflection in TVs is almost universally magnetic, and a very linear current ramp can be produced in the deflection coil by applying a square wave voltage to it. But you are correct about getting bidirectional scans to align and interleave correctly being a huge problem. It's much easier to use a unidirectional scan, which does the exact same thing each time. – Dave Tweed Aug 03 '19 at 15:45
  • @Dave Tweed Applying a step function voltage to an inductor produces an exponential current signal, correct? And applying a square wave is like applying a series of step functions, right? Are you just saying that the exponent of the current signal is negligibly small, or is there some non-linearity in the deflection coil's step response that affects the current signal? – Vaelus Aug 03 '19 at 23:19
  • Applying a constant voltage to an ideal inductor will produce a linear ramp in current. With a real inductor and a real voltage source there is stray resistance and capacitance which results in nonlinear behavior. – Peter Green Aug 04 '19 at 00:23
  • @Dave, there is a very good reason to think so ... the scanning waveform generators for a TV set (405 line) are shown as R-C relaxation oscillators using triodes. Seen in a valve databook dated August 1939... –  Aug 04 '19 at 09:42
  • Sure, they used RC timing circuits, but that doesn't mean that the capacitor voltage was amplified and applied directly to the deflection coil. My 1957 book clearly shows square waves at the deflection coil terminals. – Dave Tweed Aug 04 '19 at 11:17
  • @DaveTweed ah - I see now. Yes, by 1957, magnetic deflection had won, end of story. But in the early days, that wasn't a given; electrostatic deflection (and much longer tubes) was a contender. I don't have that data book at hand at the moment, but I believe it may have used electrostatic deflection. Certainly the fairly common post war hack involving a war-surplus "Gee" radar set was electrostatic. (A friend spent months carefully restoring one, in the 1980s, to watch the very last night of 405 line broadcasts on a 7 inch orange screen... –  Aug 04 '19 at 22:34
  • And in any case, "constant" voltage under load from 1950s circuits was probably rather optimistic. –  Aug 04 '19 at 22:35
3

Interesting, but it would complicate the electronics on both the camera and the TV side, and only the lines in the center of the screen would be refreshed at equal intervals; the lines near the top and bottom would be refreshed unevenly. It is just simpler and looks better this way.

Justme
3

CRTs have phosphors that decay in intensity comparatively fast in order to support the display of moving images (oscilloscope tubes and text terminals tended to use considerably slower phosphors). Motion pictures used 24 frames/second but did not have the decay issue: instead, a mechanism moved the film to the next frame. Even then, 24 Hz would have been a bit flickery, so the projectors interrupted the light not just when switching frames but one additional time in the middle, making the flicker frequency 48 Hz.

TV mimicked the motion picture by transferring full image data at a rate of about 24 Hz (rounded up to half the frequency of the AC power network) while "flickering" at double that rate. TV sets did not have any kind of storage (there is a delay line in color TVs, but those came much later), so they could not simply repeat a stored image without it being broadcast again (as 100 Hz TV sets do now). Instead the data needed to be sent a second time, and it made better sense to use that bandwidth to actually send the image lines interlaced, for a better match of horizontal and vertical resolution.
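
As a quick check of the arithmetic (a Python sketch; the only inputs are the film rate and the 50/60 Hz mains frequencies):

    # Film doubles its 24 frames/s with an extra shutter flash; TV rounds the
    # frame rate up to half the mains frequency and doubles the apparent rate
    # by splitting each frame into two interlaced fields.
    film_fps = 24
    print(f"film: {film_fps} frames/s shown as {2 * film_fps} light flashes/s")

    for region, mains_hz in (("Europe", 50), ("USA", 60)):
        frames = mains_hz // 2   # full pictures per second
        fields = mains_hz        # interlaced half-pictures (fields) per second
        print(f"{region}: {frames} frames/s broadcast as {fields} fields/s")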

It's actually a trick of timing of the vertical and horizontal blanking that creates the interlaced display: the TV set electronics do not particularly cater to it (and could equally well display a non-interlaced signal); it's a consequence of how the vertical and horizontal blanking pulses are interspersed.
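
A sketch of that timing trick (Python, using the 525-line NTSC frame as an example; other standards work the same way with different numbers): because each field lasts 262.5 line periods, alternate fields begin half a line period out of phase, so their scan lines land midway between the previous field's lines without the receiver doing anything special.

    LINES_PER_FRAME = 525
    LINES_PER_FIELD = LINES_PER_FRAME / 2        # 262.5 line periods per field

    for field in range(4):
        phase = (field * LINES_PER_FIELD) % 1.0  # offset from a line boundary
        print(f"field {field}: starts {phase:.1f} of a line period into a line")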

  • Early motion pictures often used faster frame rates; I would guess that was before someone discovered that flashing the image twice per frame would yield results that were almost as good as flashing individual images at the same rate, while using half as much film. – supercat Aug 05 '19 at 21:25
1

For one thing, you would have ended up with significant flicker, since the frame wouldn't be filled at the same rate over the entire screen.

There were PAL, SECAM, and variants of NTSC and PAL. None of these went top to bottom, then bottom to top. If you did this, you'd end up drawing the entire bottom and top of the screen and then it would be nearly 1/60th of a second before they were refreshed. The center of the screen would be refreshed in 1/30th of a second on average. You'd expect to see the worst flicker at the top and bottom of the frame as a result, and the least in the center.

Fields in the display didn't only contain location information, but time information as well. Interlace was basically a hack to fit in more information without excessive bandwidth. You have to remember this standard was done in the mid-1950s. Pretty impressive for its time, and they did a remarkable job, which is now all absolutely obsolete.

  • Welcome to EE.SE. I think you got your 1/60th and 1/30th back to front. – Transistor Aug 05 '19 at 22:12
  • I don't think so, the bottom two lines of the screen should be drawn, then redrawn back to back, it would be a full frame (two fields) before these two lines were redrawn again. The same would be true of the top two lines. – Jimminy Doe Aug 07 '19 at 00:57
1

The rapid retrace (from the yoke current reversal) produced the high voltage (V = L di/dt) needed for the CRT. L is the horizontal deflection coil inductance.
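
For a sense of scale, a back-of-envelope version of that formula (a Python sketch; the inductance, current swing and retrace time are round numbers assumed purely for illustration):

    L_yoke = 1e-3    # assumed horizontal deflection coil inductance: 1 mH
    delta_i = 4.0    # assumed peak-to-peak scan current: 4 A
    delta_t = 10e-6  # assumed retrace time: 10 microseconds

    v_kick = L_yoke * delta_i / delta_t
    print(f"retrace kick = L*di/dt = {v_kick:.0f} V across the yoke")
    # In a real set a winding on the flyback (line output) transformer steps a
    # pulse of this kind up to the tens of kilovolts needed for the CRT anode.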

John Reed