
In an FPGA stereo vision project I am using two MT9V032 cameras. The cameras are connected as shown in the application example in the datasheet.

Stereo vision topology

In stereo output mode each serial data word is 18 bits long: one start bit, one stop bit, and 8 bits each for the master and slave camera pixels.

Data format stereo vision
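
To make the framing concrete, here is a rough bit-level model of one 18-bit word in Python (not HDL). The ordering of the framing bits and of the master/slave bytes is my assumption from the figure and should be checked against the datasheet:

    # Rough model of one 18-bit stereo word, for clarity only (not HDL).
    # Assumed layout, MSB first: start bit, 8 master pixel bits, 8 slave pixel bits, stop bit.
    # The start/stop bit values and the byte order are placeholders; take them from the datasheet.
    START_BIT = 1
    STOP_BIT = 0

    def pack_word(master_pixel, slave_pixel):
        """Build an 18-bit stereo word from one master and one slave pixel byte."""
        assert 0 <= master_pixel <= 0xFF and 0 <= slave_pixel <= 0xFF
        return (START_BIT << 17) | (master_pixel << 9) | (slave_pixel << 1) | STOP_BIT

    def unpack_word(word):
        """Strip the start/stop bits and return (master_pixel, slave_pixel)."""
        master_pixel = (word >> 9) & 0xFF
        slave_pixel = (word >> 1) & 0xFF
        return master_pixel, slave_pixel

    # Example: 0xAB from the master camera, 0xCD from the slave camera.
    w = pack_word(0xAB, 0xCD)
    assert unpack_word(w) == (0xAB, 0xCD)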

In Application Note XAPP1064 I read that the maximum deserialization width on Spartan-6 is 16 bits. Is there a trick to support higher deserialization factors?

Since I do not need the start and stop bits, is there a way to deserialize only the bits between them?

Timm
  • One possible solution that came to mind: the clock for the camera is generated by the FPGA, and the internal FPGA clock runs at 2× the camera's SYS_CLOCK, so it is divided by 2 before being output to the camera. The ISERDES2 blocks are clocked from the internal clock. Two cascaded ISERDES2 blocks decode 5 + 4 bits, so I get 9 bits per internal clock cycle, and the second clock cycle gives me another 9 bits. – Timm Jul 27 '16 at 15:44
  • A 9-bit SERDES factor is not possible. – Timm Jul 27 '16 at 16:09
  • If 9-bit is not possible in each block, do the 9-bit SERDES to get a half-width data bus, and then do 2:1 deserializing within the FPGA fabric now that the data is at a much slower rate. – Tom Carpenter Jul 28 '16 at 00:03
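
A rough behavioral model of the two-capture idea from the comments, in Python rather than HDL. It assumes the first 9-bit capture carries the start bit plus the 8 master pixel bits and the second carries the 8 slave pixel bits plus the stop bit; the actual bit order out of the cascaded ISERDES2 pair has to be checked against XAPP1064 and the camera datasheet:

    # Behavioral model of the two-capture idea (assumed bit order, data path only).
    # Capture A (first internal clock):  start bit + 8 master pixel bits  -> 9 bits
    # Capture B (second internal clock): 8 slave pixel bits + stop bit    -> 9 bits
    def merge_captures(capture_a, capture_b):
        """Combine two 9-bit ISERDES2 captures into (master_pixel, slave_pixel)."""
        assert 0 <= capture_a < 512 and 0 <= capture_b < 512
        word = (capture_a << 9) | capture_b          # reassemble the 18-bit frame
        master_pixel = (word >> 9) & 0xFF            # drop the start bit
        slave_pixel = (word >> 1) & 0xFF             # drop the stop bit
        return master_pixel, slave_pixel

    # Example: start = 1, master = 0x3C, slave = 0x5A, stop = 0
    a = (1 << 8) | 0x3C              # 9-bit capture A
    b = (0x5A << 1) | 0              # 9-bit capture B
    assert merge_captures(a, b) == (0x3C, 0x5A)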

1 Answer


You will probably need to use a gearbox to re-pack the bits from whatever deserializer width makes the most sense. Altera has some example gearboxes in their cookbook, as well as some design techniques.
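
Roughly speaking, a gearbox here is just a small bit accumulator: it takes in words of one width and emits words of another once enough bits have piled up. Below is a behavioral Python sketch of that idea only; the real thing would be a few registers and a counter in the fabric, and the 9-bit input / 18-bit output widths are just an example taken from the comments above:

    # Behavioral sketch of a gearbox: accept fixed-width input words, emit
    # fixed-width output words once enough bits have accumulated (MSB-first).
    class Gearbox:
        def __init__(self, in_width, out_width):
            self.in_width = in_width
            self.out_width = out_width
            self.buffer = 0      # accumulated bits, MSB-first
            self.count = 0       # number of valid bits in the buffer

        def push(self, word):
            """Feed one input word; return a list of completed output words."""
            self.buffer = (self.buffer << self.in_width) | (word & ((1 << self.in_width) - 1))
            self.count += self.in_width
            out = []
            while self.count >= self.out_width:
                self.count -= self.out_width
                out.append((self.buffer >> self.count) & ((1 << self.out_width) - 1))
            self.buffer &= (1 << self.count) - 1     # discard the bits already emitted
            return out

    # Example: re-pack a stream of 9-bit SERDES words into 18-bit camera frames,
    # from which the start/stop bits can then be stripped as in the question.
    gb = Gearbox(in_width=9, out_width=18)
    frames = []
    for capture in [0x13C, 0x0B4, 0x1FF, 0x001]:     # arbitrary 9-bit test words
        frames.extend(gb.push(capture))
    # frames now holds [0x278B4, 0x3FE01]: one 18-bit frame per two 9-bit inputs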

alex.forencich