A logic analyzer generally has two modes of operation, depending on where its sampling clock comes from. The clock can come from your circuit, in which case the samples are synchronized to the operation of the circuit, or it can be generated internally by the logic analyzer itself, in which case the samples are not synchronized to the circuit. I tend to refer to these as "synchronous" and "asynchronous" operation, respectively, but Agilent uses the terms "state" and "conventional timing".
State mode is useful for tasks such as capturing the bus activity of a microprocessor to trace how it's executing software. Conventional timing mode is used to measure actual delays between signal transitions.
The specification is telling you that in state (synchronous) mode, the 1661A can accept an external clock of up to 100 MHz, while in conventional timing (asynchronous) mode it can generate an internal sampling clock of up to 250 MHz, and by interleaving channels it can reach a 500 MHz sample rate.
At the highest rate, the analyzer is taking a sample every 2 ns, so this is the limit of the resolution you can get for measuring things like setup/hold times.
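If it helps to see the arithmetic, here's a quick back-of-the-envelope sketch in Python (the rates are just the figures from the spec discussed above):

```python
# Sample period at each of the 1661A's quoted rates
def sample_period_ns(rate_mhz):
    """Convert a sample rate in MHz to a sample period in nanoseconds."""
    return 1000.0 / rate_mhz

for rate in (100, 250, 500):   # state clock, timing, interleaved timing (MHz)
    print(f"{rate} MHz -> {sample_period_ns(rate):.0f} ns per sample")

# 100 MHz -> 10 ns per sample
# 250 MHz ->  4 ns per sample
# 500 MHz ->  2 ns per sample
```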
Also, keep in mind that any transition you see on the screen could actually have happened at any time within ±2 ns of the position shown, so if two transitions are shown within one sample interval of each other, you can't be certain which one actually occurred first. This makes it tricky to measure small intervals with a logic analyzer, and an oscilloscope is often a better tool for that sort of task.
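To make that uncertainty concrete, here's a toy quantization sketch (nothing the analyzer itself does; the edge times and clock phases are made up): two edges that are really only 0.8 ns apart can show up either at the same sample or a full sample apart, depending purely on where the internal sampling clock happens to fall, which is why you can't trust sub-sample ordering or spacing on the display.

```python
import math

SAMPLE_NS = 2.0                  # sample interval at the 500 MHz interleaved rate
edge_a, edge_b = 10.3, 11.1      # two hypothetical real edges, only 0.8 ns apart

def sampled_at(edge_time_ns, clock_phase_ns):
    """Time of the first sample at or after the real edge, for a given clock phase."""
    n = math.ceil((edge_time_ns - clock_phase_ns) / SAMPLE_NS)
    return clock_phase_ns + n * SAMPLE_NS

for phase in (0.0, 0.5, 1.5):    # a few arbitrary clock phases
    a = sampled_at(edge_a, phase)
    b = sampled_at(edge_b, phase)
    print(f"clock phase {phase} ns: A shown at {a} ns, B shown at {b} ns, "
          f"apparent separation {b - a} ns")

# clock phase 0.0 ns: both edges land on the same sample (0 ns apart)
# clock phase 0.5 ns: they appear a full sample (2 ns) apart
# clock phase 1.5 ns: same sample again
```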