
I'm looking for a generic algorithm to calculate a red/cyan anaglyph from an original image and its black/white depth map, as in this example.

That algorithm is used, for example, in Photoshop, but I can't find a readable explanation of how to reproduce it.

Mike Partridge
TheUnexpected

2 Answers


This paper might be helpful. It focuses on outdoor/landscape scenes. The following is an excerpt from the paper's abstract:

This paper presents a new unsupervised technique aimed to generate stereoscopic views estimating depth information from a single input image. Using a single input image, vanishing points/lines are extracted using a few heuristics to generate an approximated depth map. The depth map is then used to generate stereo pairs.

  • Welcome to Stack Exchange! Could you provide a bit more than just a link? What page are the algorithm and its explanation on? The PDF also seems quite heavy (so not accessible for every visiting user) and there is no guarantee that it will be available in the near future. – Tamara Wijsman Dec 15 '11 at 20:54
    Admittedly I don't know a lot about the subject but decided to give Google a try. The paper I found seemed to come pretty close to an answer, so sharing it seemed like a good idea. – Dario Hamidi Dec 15 '11 at 21:15

Here's an explanation of the Photoshop gag: http://www.threadless.com/profile/433934/elleevee/blog/493381/Threadless

Basically they're using the displace filter to shove only the red channel, or the blue+green channels, of an image left or right. The heightmap just attenuates the distance by which each pixel is displaced. I believe the displace filter interpolates the in-between values: if the pixel at (10,0) was displaced to the left by 3 pixels, and the pixel at (11,0) was displaced to the left by only 1, then the two pixels between the target positions would be interpolated from the two original values at 66% and 33%.
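The channel-shifting idea above can be sketched in a few lines of NumPy. This is not Photoshop's actual displace filter, just a minimal illustration of the same principle: shift only the red channel horizontally, scale the shift per pixel by a normalized depth map, and let linear interpolation fill the in-between values. The `max_shift` parameter is an assumption, playing the role of the filter's displacement scale.

```python
import numpy as np

def depth_anaglyph(image, depth, max_shift=8.0):
    """Shift the red channel by a depth-scaled amount to fake a stereo view.

    image: (H, W, 3) float array in [0, 1]
    depth: (H, W) float array in [0, 1]; larger values are shifted more
    max_shift: maximum horizontal displacement in pixels (assumed parameter)
    """
    h, w, _ = image.shape
    out = image.copy()
    xs = np.arange(w, dtype=float)
    for y in range(h):
        # Per-pixel shift, attenuated by the depth map.
        shift = depth[y] * max_shift
        # Resample the red channel at the displaced positions; np.interp
        # linearly interpolates the in-between values, as described above.
        out[y, :, 0] = np.interp(xs + shift, xs, image[y, :, 0])
    return out
```

Viewing the result through red/cyan glasses, each eye sees a slightly different red-vs-cyan alignment, which the brain reads as depth. A fuller implementation would also shift the blue+green channels the opposite way and handle the pixels that fall off the image edge.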

That interpolation is effectively covering for missing data: if you really had two viewpoints of the scene, those pixels would carry information that is hidden in the single view. I can imagine an upgrade to the method outlined above, where reconstruction algorithms similar to Photoshop's content-aware fill could take a better stab at filling in the missing information.