6

I have a camera rigidly mounted together with an IMU that consists of 2-dof gyroscopes and 3-dof accelerometers. My aim is to compute the global optical flow of the image captured by the camera, in order to do global motion compensation when compressing the camera video stream. Instead of doing image processing to compute global optical flow, is it possible to obtain the optical flow based purely on the IMU readings?

Ang Zhi Ping
  • 577
  • 1
  • 6
  • 14
  • I do not think the first three tags really apply, as you are asking whether you can do it without image processing, as far as I can tell. – Kortuk Jan 10 '12 at 00:58
  • How good is your IMU? They vary vastly. Can you advise brand and model? What sort of results are you wanting to achieve, and what are you able to achieve optically? – Russell McMahon Jan 10 '12 at 02:20

1 Answer

2

If by optical flow you mean the movement of the board over time, it's possible to do that: I implemented a tracking algorithm using a Kalman filter on an STM32 microcontroller to get the orientation of an IMU. However, it involves a large amount of computation, and I'm not sure it would reduce the load compared to image processing.

Edit:

I actually think it could be feasible; in fact, it's what camera manufacturers do for image stabilization.

Caveats:

  • It's not that hard to do, but I believe it's quite hard to do well, or at least in a meaningful way;

  • IMUs are typically oriented toward slow-ish, accurate tracking, whereas you need to compensate for higher-frequency motion such as vibration and shake. You'll need to tune your tracking accordingly.

In short, you have to convert the x/y/z accelerometer axes and the pitch/yaw gyroscope axes into the same coordinate frame, and apply the desired amount of filtering. The gyroscopes give you angular rate directly; integrating the accelerations once gives linear velocity, and integrating again gives displacement, as desired. That's what you then have to compensate in the video.
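For the rotational part, which the gyroscopes give you directly, the frame-to-frame pixel shift can be approximated without any integration of the accelerometers. A minimal sketch, assuming a pinhole camera model with the focal length expressed in pixels (the function name and parameters are illustrative, not from any particular library):

```python
def gyro_to_global_flow(omega_pitch, omega_yaw, dt, focal_px):
    """Approximate global optical flow (in pixels) between two frames,
    induced by camera rotation, valid for small angles per frame.

    omega_pitch, omega_yaw -- angular rates in rad/s from the gyroscopes
    dt                     -- frame interval in seconds
    focal_px               -- camera focal length expressed in pixels

    Rotation about the yaw axis shifts the image horizontally and
    rotation about the pitch axis shifts it vertically; for small
    angles the shift is approximately focal_px * omega * dt.
    """
    du = focal_px * omega_yaw * dt    # horizontal flow, pixels
    dv = focal_px * omega_pitch * dt  # vertical flow, pixels
    return du, dv
```

For example, a 0.1 rad/s yaw rate at 30 fps with an 800 px focal length gives roughly 2.7 px of horizontal shift per frame. Note that translational flow (from the accelerometers) additionally depends on scene depth, which the IMU alone cannot provide.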

You'll probably have to compensate for some delay, either in the video or in the sensor readings, to time-align the compensation with the recording.
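One simple way to do that time-alignment is to timestamp the IMU-derived flow samples and, for each video frame, pick the sample nearest to the frame's timestamp minus the estimated sensor latency. A sketch under those assumptions (`sensor_latency` is a hypothetical value you would have to calibrate):

```python
import bisect

def match_flow_to_frame(frame_t, imu_samples, sensor_latency=0.0):
    """Pick the IMU-derived flow sample closest in time to a video frame.

    frame_t        -- timestamp of the video frame, seconds
    imu_samples    -- list of (timestamp, flow) tuples, sorted by timestamp
    sensor_latency -- estimated delay of the IMU readings relative to the
                      video clock (an assumption; calibrate it offline)
    """
    target = frame_t - sensor_latency
    times = [t for t, _ in imu_samples]
    i = bisect.bisect_left(times, target)
    # consider the neighbours on either side of the insertion point
    candidates = [j for j in (i - 1, i) if 0 <= j < len(imu_samples)]
    best = min(candidates, key=lambda j: abs(imu_samples[j][0] - target))
    return imu_samples[best][1]
```

In practice you might interpolate between the two neighbouring samples instead of picking the nearest one, since the IMU usually runs at a much higher rate than the video.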

clabacchio
  • 13,481
  • 4
  • 40
  • 80
  • More specifically, the instantaneous frame-to-frame optical flow. I do not require the total accumulated movement of the board, which would cause integration errors to accumulate. Any specific details regarding your project, i.e. the model of the IMU? Doing optical flow in hardware may be more computationally intensive than Kalman filtering. – Ang Zhi Ping Jan 12 '12 at 02:54
  • I don't know exactly what you mean by IMU, but if you mean an integrated system that automatically computes the variation of pitch, roll and yaw, I didn't use a standard one. I used a board built at the laboratory where I did that project: a normal STM32 microcontroller (I don't remember the exact version; I think it ran at 32 MHz, but that's all) with some MEMS sensors (a 3-axis accelerometer, a gyroscope, and a magnetometer, though you don't need the latter for your job). We just implemented a basic Kalman filter to merge the data from the three sensors, using quaternions (to avoid gimbal lock)... – clabacchio Jan 12 '12 at 08:13