6

I need to frame-synchronize a stereo camera pair for a computer vision application. The synchronization should be the best possible, ideally at the millisecond level. I would like to get about 10 FPS, either as video or still images. The resolution requirement is around 2 Mpix.

The application will be a mobile stereo road/terrain scanner mounted on a passenger car moving at up to, say, 100 km/h during daytime (thus good lighting conditions). Rolling shutter should not be an issue with the Pi camera at Full HD according to this and this video.

My current hardware search has ended with a Raspberry Pi solution. I think I basically have 2 options with it:

  1. Get two standalone Raspberry Pi units, each with a camera. Synchronize them via an external GPIO trigger. Cost is 2×25 + 2×16 = £82.

  2. Get the new Compute Module Development Kit and connect both cameras to the single board. Couple the two cameras for simultaneous release in software. Cost is 126 + 16 + 16 = £158.

Will both options give similar sync results? Also, I would prefer still image output over video output, because I would need to extract the given frames from the video anyway. The still images would preferably be uncompressed so as not to lose image detail, but JPEG is OK too. I am not sure how many FPS an external GPIO trigger can achieve with still imaging.

Cost is an important factor for me. Using USB webcams probably won't work, since USB devices can't be synced easily (see link below). I will be happy with another hardware recommendation if someone knows better equipment within a similar price range.

The origin of this question is a post on the Video Production SE site titled Creating synchronized stereo videos with 2 cameras at low cost. That link can reveal some more background.

Kozuch
  • 471
  • 6
  • 24
  • Do you want to use the Raspberry Pi camera, or some USB webcam? – Kamil Aug 11 '14 at 22:46
  • How high a resolution do you need? What light conditions? – Kamil Aug 11 '14 at 22:52
  • I added some more info to the question according to what you asked for. I plan to use the Pi camera, not USB webcams (USB cannot be synced). – Kozuch Aug 11 '14 at 23:18
  • AFAIK, for cameras like OV5640, there's no clearly documented way of doing frame synchronization in Linux at single frame level in video mode. You may have better luck with still images but still limited by the non real time design of linux. – user3528438 Oct 24 '16 at 14:16
  • Similar question was [asked at Raspberry Pi SE](http://raspberrypi.stackexchange.com/q/22696/19069). – Kozuch Oct 24 '16 at 14:27

2 Answers

1

I think both of your options/ideas (Pi and Compute Module) can work.

The Raspberry Pi GPIO is fast enough (10 MHz+); reading the GPIO input state every millisecond will be no problem at all.
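To get a feel for how consistently a user-space loop can hit a 1 ms period on a stock (non-realtime) Linux kernel, here is a minimal, hardware-free Python sketch. It only uses `time.sleep` and measures how late each tick fires; on a real Pi you would poll the GPIO pin inside the same loop (e.g. with RPi.GPIO), which is omitted here so the sketch runs anywhere.

```python
import time

def measure_jitter(period_s=0.001, iterations=200):
    """Tick every period_s seconds and record how late each tick fires.

    Returns a list of per-tick overshoots in seconds. On an idle Linux
    system these are typically well under 1 ms, but occasional
    multi-millisecond spikes are possible under load.
    """
    overshoots = []
    next_tick = time.monotonic() + period_s
    for _ in range(iterations):
        delay = next_tick - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        # how far past the intended deadline did we wake up?
        overshoots.append(time.monotonic() - next_tick)
        next_tick += period_s
    return overshoots

if __name__ == "__main__":
    j = measure_jitter()
    print(f"max overshoot: {max(j) * 1e6:.0f} us")
```

Running this on the target Pi (idle vs. loaded) gives a quick empirical answer to whether millisecond-level trigger polling is realistic for a given setup.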

If you want to process data on the Raspberry Pi with OpenCV or something like that, don't expect too much; the Pi CPU is not as fast as some people think. Do some experiments with pre-recorded video or images and see how much computing power you have.

If you find that the Raspberry Pi is not fast enough, you can use something like this (for image processing):

Nvidia Jetson TK1 Development Kit

- NVIDIA Kepler GPU with 192 CUDA cores
- NVIDIA 4-Plus-1 quad-core ARM Cortex-A15 CPU
- 2 GB memory, 16 GB eMMC
- Gigabit Ethernet, USB 3.0, SD/MMC, miniPCIe
- HDMI 1.4, SATA, Line out/Mic in, RS232 serial port
- Expansion ports for additional display, GPIOs, and high-bandwidth camera interface

I don't know much about this, I just read about these Nvidia products somewhere, but it costs less than the Pi Compute Module kit (192 USD is about 115 GBP?), and there is a lot more computing power.

Kamil
  • 5,926
  • 9
  • 43
  • 58
  • Thanks a lot for your answer! I do not plan any CV processing on the Pi. I only need to record the video in frame sync and I will do all the other work on a PC. Do you know if there may be another board on the market that has 2 CSI interfaces (or other camera interfaces, but not USB) and is cheaper than the Compute Module kit and could do the same job? – Kozuch Aug 12 '14 at 00:09
  • 1
    Have you checked the time it takes for the PI camera module to take a picture, and how much deviation there is in that time? I think you might struggle with ms-level accuracy, not because of the triggering mechanism through GPIO, but the delays between the trigger and the photo being taken. Also, on a cloudy day, a 10ms shutter speed (1/100) isn't unheard of. – RJR Aug 12 '14 at 03:28
  • @Kozuch I think Compute Module Kit is cheapest device with two CSI interfaces. – Kamil Aug 12 '14 at 09:43
  • @RJR I haven't tested Pi camera response time, I don't have one. I was using cheap USB Logitech camera for my CV project. – Kamil Aug 12 '14 at 09:44
  • @RJR: I don't mind a delay between the trigger and the photo being taken, given it will be exactly the same for both Pi units. Of course a long delay wouldn't be nice, but again my main goal is frame synchronization. I guess both units should behave exactly the same if I do a 100% identical install and setup? – Kozuch Aug 12 '14 at 10:05
  • @Kozuch - The issue is that the trigger latency may **not** be deterministic, because linux may be doing something else in the background. Linux is *not* a realtime operating system. You may get away with what you want, but any realtime behaviour is going to be fairly brittle, and likely to be broken if you load the system at all. – Connor Wolf Aug 12 '14 at 11:20
  • @ConnorWolf: Would my option #2 (using Compute Module with 2 cameras on it) eliminate possible issues with trigger latency since it would employ a SW trigger that might actually be more low-level than GPIO trigger? – Kozuch Aug 12 '14 at 14:16
  • @Kozuch - I don't think so, though it would probably be a bit better. The issue is the delta between when you want to take the image, and when the hardware actually takes the image. If linux is servicing another thread, you have to wait for the next context-switch to actually have your thread do anything. – Connor Wolf Aug 13 '14 at 01:38
  • @ConnorWolf: The image/video capture will be the only task on my Pi setup and there will be no other load. Units will be disconnected from all COMs (Ethernet etc.). However, I must agree with you on a theoretical level with RTOS. But an RTOS with a stereo camera is a totally different price level, right? The question is how much of the RTOS capability a clean Raspbian install will be able to deliver. Simply put, if I get sync within a few ms (say under 10 ms), this is still a great result for me in this price range. – Kozuch Aug 13 '14 at 08:02
  • @Kozuch - If you're shooting for 10 ms, rather than 1 ms, that's a lot more tractable, I think. Really, it'll /probably/ work, I just want to be sure you're aware of the possible complications down the road, particularly if later on you decide to start adding other tasks to the running rPi. – Connor Wolf Aug 13 '14 at 21:51
  • Really, running a RTOS doesn't mean it'll cost anything more, you could theoretically even just do it right on the rPi (run bare-metal). The issue is mostly lack of documentation. – Connor Wolf Aug 13 '14 at 21:51
  • It probably won't be that bad with the latency of a default Raspbian kernel: http://www.emlid.com/raspberry-pi-real-time-kernel-available-for-download/ I guess this may be considered soft real-time and suit my needs more than enough (considering 1 ms precision)? The average values are almost the same. – Kozuch Aug 21 '14 at 10:35
1

After a lot of research, and even trying to set up 2 RPis with one camera each for frame-synchronized stereo capture myself, I never got as far as precisely measuring the sync visually, but at that time I learned the following from various discussions and tech specs:

Even though it is possible to sync the RPi boards themselves via a common GPIO signal (for whatever task), it is not possible to sync their cameras for continuous stereo capture. The main reason is that the camera sensors and boards do not support external trigger functionality (neither the v1 OmniVision OV5647 nor the v2 Sony IMX219PQ camera board). The OV5647 actually has a so-called FREX input pin which could be used for frame sync, but it is neither routed out of the sensor to the board nor supported in the official RPi camera driver/software. The ArduCam board has a FREX pinout on board, but still no software to use it.

It may be possible to sync a single shot (not continuous stereo), though. I recall a project for bullet-time photography - I explored their Python code on GitHub (I can't remember the exact project name now) - they synced the RPis via Ethernet and then called the raspistill binary synchronously for a single shot. I think this can work, even though there was a 1000 ms delay at the beginning of raspistill to allow the cameras to set exposure (the timeout is the same on all cameras, so the sync should still work). I think different exposure times may be an issue with PiFace, though. Once raspistill is started (and hopefully after the first frame), the sensor enters a so-called "free running mode" where video is streamed from the sensor at a given FPS. Because each camera board has its own oscillator, the FPS will never be 100% the same for two boards, even if they are started at the same time, and the frames will drift apart the longer the video runs.
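The oscillator-drift argument can be put into rough numbers. A hypothetical sketch, assuming a ±50 ppm crystal tolerance per board (a typical figure for consumer crystals, not taken from the Pi camera datasheet): it estimates how long two free-running 10 FPS streams can run before drifting apart by half a frame period.

```python
def seconds_until_drift(fps, ppm_per_board=50.0, frame_fraction=0.5):
    """Worst-case time until two free-running cameras drift apart by
    frame_fraction of a frame period, given that each board's clock may
    be off by +/- ppm_per_board parts per million (one fast, one slow)."""
    frame_period = 1.0 / fps                    # seconds per frame
    relative_rate = 2.0 * ppm_per_board * 1e-6  # worst-case relative clock error
    return frame_fraction * frame_period / relative_rate

if __name__ == "__main__":
    t = seconds_until_drift(fps=10)
    print(f"half-frame drift after ~{t:.0f} s")  # ~500 s with these assumptions
```

So even with well-matched crystals, the two streams would be half a frame apart after roughly eight minutes, which is why free-running capture cannot stay frame-synchronized indefinitely.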

TL;DR:

While it may be possible to sync a single stereo shot, it is not easily possible to frame-synchronize two RPi camera boards for continuous stereo capture, due to the lack of an external trigger feature on the cameras' sensors.

Kozuch
  • 471
  • 6
  • 24
  • Another way to approach this problem is to gate the camera clock. If the camera you choose depends on an external clock source, like the "mclk" input to the OV5640, then you can use an FPGA or fast MCU to gate the clock until the HSYNC/VSYNC from both cameras are synchronized (you block the fast camera from the reference clock for a few cycles to slow it down). – user3528438 Oct 24 '16 at 14:28
  • @user3528438 Will auto exposure (=various frame exposure times on the cameras) ruin the sync or is there a rule that the exposure time can not get longer than 1/FPS? – Kozuch Oct 24 '16 at 14:32
  • 1
    I haven't read any camera datasheet or firmware code thoroughly enough to tell you either way, but my experience with the OV5640 in this reference design kit https://www.e-consystems.com/CX3-Reference-Design-Kit.asp confirms that the FPS setting overrides auto exposure: the video it captures is significantly dimmer indoors in 60 FPS mode than in 15 FPS mode. – user3528438 Oct 24 '16 at 14:39
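The clock-gating idea from the comments above can be illustrated with a toy simulation (pure Python, with a hypothetical 100 ppm mismatch): two cycle counters run from slightly different clocks, and the faster one is held (its clock gated) whenever it gets ahead, so the two counters never drift apart by more than about one clock cycle.

```python
def simulate_gated_sync(cycles, fast_ratio=1.0001):
    """Toy model of the clock-gating scheme: camera B's oscillator runs
    fast_ratio times faster than camera A's. B's clock is gated (held)
    for any cycle in which B is already ahead of A. Returns the maximum
    counter mismatch observed, in clock cycles."""
    a = b = 0.0
    max_mismatch = 0.0
    for _ in range(cycles):
        a += 1.0          # reference clock ticks every cycle
        if b <= a:        # let B tick only while it has not pulled ahead
            b += fast_ratio
        max_mismatch = max(max_mismatch, abs(a - b))
    return max_mismatch

if __name__ == "__main__":
    # Without gating the mismatch would grow without bound
    # (cycles * 0.0001 cycles); with gating it stays bounded.
    print(simulate_gated_sync(100_000))
```

This only models the counters, not real HSYNC/VSYNC timing, but it shows why holding the fast camera off its reference clock keeps the two sensors phase-locked instead of drifting.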