9

I just saw a video about what the publishers call the "next major step after the invention of 3D". According to the speaker, they use a huge number of atoms grouped into clouds, instead of polygons, to reach a level of unlimited detail.

They tried their best to make the video understandable for people with no knowledge of rendering techniques, and, whether for that reason or for other purposes, left out all details of how their engine works.

The level of detail in their video does look quite impressive to me.

  • How is it possible to render scenes using custom atoms instead of polygons on current hardware, in terms of both speed and memory?
  • If this is real, why has nobody else even thought about it so far?

As an OpenGL developer, I'm really baffled by this and would like to hear what experts have to say. I also don't want this to look like a cheap advert, so I will include the link to the video only if requested, in the comments section.

Jan Doggen
  • 1,140
  • 4
  • 16
  • 22
  • 7
    Well, they've invented the most revolutionary thing since the beginning of computer graphics, yet they don't know how many millimeters fit into an inch, what does that tell you. –  Aug 01 '11 at 20:42
  • 2
    They are so limited on detail (forgive the pun) about their technology that a discussion is hard. From what I understand from the video it's 20fps in software. I see a lot of static geometry, a whole bunch of instancing and don't know how much of the data is precomputed or generated on the fly. Still interesting though. I wouldn't want to call shenanigans completely. Not with the funding acquired, although that does not mean a whole lot. – Bart Aug 01 '11 at 20:45
  • 1
    @Bart: Well, funding is the entire point of that video... :-) There's always a fool who will give you money for the perpetuum mobile or for the potion of longevity. Or for infinite detail rendering. –  Aug 01 '11 at 21:20
  • 1
    @Damon Haha, true. Though thinking about it, and from what I'm seeing, I'm not sure it's impossible. It's the boasting in the video that makes it hard to believe. But I could think of a tech-demo scenario, based on existing technologies, that could do this. An actual game though would be a completely different story. – Bart Aug 01 '11 at 21:26
  • 8
    It's always suspicious if someone makes fantastic claims and only shows footage of something entirely unrelated (such as Crysis). Even more so if there are claims like "technology as used in medicine/space travel", and mixing several things that have nothing to do with each other. Certainly it is possible to produce (nearly) infinite detail procedurally and it is certainly possible to render that. So what, every 1984 mandelbrot demo could do that. However, the claim is that they render objects like that elephant in infinite detail. And that's just bollocks, because they can't. –  Aug 01 '11 at 21:35
  • 8
    "Your graphics are about to get better by a factor of 100,000 times." Extraordinary claims require extraordinary evidence. – Brad Larson Aug 01 '11 at 22:18
  • 11
    Notch [wrote two](http://notch.tumblr.com/post/8386977075/its-a-scam) [blog posts](http://notch.tumblr.com/post/8423008802/but-notch-its-not-a-scam) about this video. – Kevin Yap Aug 04 '11 at 05:40
  • 2
    Why was this question migrated? – aib Aug 04 '11 at 10:54
  • 1
    @aib: the current moderators are very strict on SO. The meta sites can be used to discuss this. It seems to me that people like this policy, even if it leads to unwarranted closing or migration of some questions. Lessening the strictness would undoubtedly lead to more noise. So the current consensus is to accept that false positives happen with moderation, but in return we get a very clean site. – Tamás Szelei Aug 08 '11 at 14:41
  • @Tamás Szelei: I highly disagree with that policy but this is not the place to discuss it. Thank you for your answer. – aib Aug 08 '11 at 22:13
  • @aib, question was closed in SO because it doesn't fit the guidelines, I voted to migrate it here, where it does fit the guidelines, so it could be reopened. The separation between the two sites is confusing and I have asked moderates about it too. – Danny Varod Sep 07 '11 at 10:23
  • @KevinY If anyone knows anything about performance, it's Notc OUT OF MEMMORY ERROR – Ben Brocka Oct 18 '11 at 21:53
  • This looks a bit like one of those companies that promise miracles to rip off clueless investors. – user281377 May 16 '12 at 12:08
  • While I think these guys are somewhat full of it I do recall talk of moving from a polygon model to a spherical model for rendering mentioned in my graphics course. The images were stunning but it was not something that was done on the fly...no idea how long it took to render one frame. – Rig May 16 '12 at 12:21

7 Answers

11

It's easy to do that. Using an octree, you simply divide the world into progressively smaller pieces until you reach the level of detail needed; this might be the size of a grain of sand, for example. Think Minecraft taken to an extreme.

What do you render then? If the detail is small enough you may consider rendering blocks: the leaf nodes of the octree. Other options include spheres or other geometric primitives. A color and normal can be stored at each node, and for reduced LOD one can store composite information at higher levels of the tree.

How can you manage so much data? If the tree is an actual data structure, you can have multiple pointers reference the same subtrees, much like reusing a texture, except that here it includes geometry too. The trick is to get as much reuse as possible at all levels. For example, if you connect 4 octants in a tetrahedral arrangement all to the same child node at every level, you can build a very large 3D Sierpinski fractal using almost no memory. Real scenes will be much larger, of course.
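
The subtree-sharing trick can be sketched in a few lines. This is my own illustrative code, not anything from the demo: four octants of every node reference the same child subtree, so the geometry a renderer would traverse grows as 4^depth while the nodes actually stored grow only linearly with depth.

```python
class Node:
    def __init__(self):
        # children: 8 entries, each a Node or None (an empty octant)
        self.children = [None] * 8

def sierpinski(depth):
    """Build a Sierpinski-like octree: 4 of the 8 octants at every level
    all reference the SAME child subtree (shared, not copied)."""
    node = Node()                    # a single leaf at the bottom
    for _ in range(depth):
        parent = Node()
        for i in (0, 3, 5, 6):       # 4 octants in a tetrahedral arrangement
            parent.children[i] = node
        node = parent
    return node

def apparent_leaves(node):
    """Leaves a renderer would traverse (shared subtrees counted repeatedly)."""
    kids = [c for c in node.children if c is not None]
    if not kids:
        return 1
    return sum(apparent_leaves(c) for c in kids)

def stored_nodes(node, seen=None):
    """Nodes actually held in memory (each shared subtree counted once)."""
    seen = set() if seen is None else seen
    if id(node) in seen:
        return 0
    seen.add(id(node))
    return 1 + sum(stored_nodes(c, seen) for c in node.children if c)

root = sierpinski(8)
print(apparent_leaves(root))  # 65536 apparent leaves (4**8)
print(stored_nodes(root))     # 9 nodes actually stored (1 leaf + 8 levels)
```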

The problem is that this only works for static geometry, because real animation would require manipulating all that data every frame. Rendering, however, especially with variable LOD, is no problem.

How to render such a thing? I'm a big fan of ray tracing, and it handles that type of thing quite well with and without a GPU.

All of this is speculation of course. I have no specific information on the case you're talking about. And now for something related but different:

A huge amount of data rendered

EDIT And here is one that I did, but I deliberately altered the normals to make the boxes more apparent:

Stanford bunny in voxels

That frame rate was on a single core, IIRC. Doubling the depth of the tree will generally cut the frame rate in half, while using multiple cores scales nicely. Normally I keep primitives (triangles and such) in my octree, but for grins I decided to render the leaf nodes of the tree itself in this case. Better performance can be had if you optimize around a specific method, of course.

Somewhere on ompf there is a car done with voxels that is really fantastic - except that it's static. Can't seem to find it now...

Caleb
  • 38,959
  • 8
  • 94
  • 152
phkahler
  • 501
  • 3
  • 8
  • I agree with this assessment: just watched the video myself and was struck by how static their scenes are (funny when they compare with polygon grass; at least it's blowing in the wind while theirs clearly isn't). – timday Aug 01 '11 at 22:16
  • 1
    Unfortunately the links are dead. – Joachim Sauer May 16 '12 at 10:11
  • 1
    @JoachimSauer Indeed, the OMPF forum has gone down some time ago. A direct replacement [is available here](http://igad2.nhtv.nl/ompf2/) but I'm unaware whether any of the content has been migrated. – Bart May 16 '12 at 10:17
  • Doesn't look like it. Searching for "bunny" on that forum turns up no results for me. – Joachim Sauer May 16 '12 at 10:18
  • the first link opens pop ups and alerts box's -1 – NimChimpsky May 16 '12 at 12:54
6
  • How is it possible to render scenes using custom atoms instead of polygons on current hardware? (Speed, memory-wise)

Nothing in the video indicates to me that any special hardware was used. In fact, it is stated that this runs in software at 20 fps, unless I have missed something.

You'll perhaps be surprised to know, though, that there have been quite a lot of developments in real-time rendering using a variety of techniques such as ray tracing, voxel rendering and surface splatting. It's difficult to say which have been used in this case. (If you're interested, have a look at http://igad2.nhtv.nl/ompf2/ for a great real-time ray tracing forum, or http://www.atomontage.com/ for an interesting voxel engine. Google "surface splatting" for some great links on that topic.)

If you look at the movie you'll notice that all geometry is static, and that although it is detailed, there is quite a lot of object repetition, which might hint at instancing.
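
A rough sketch of what instancing buys you, with my own illustrative numbers rather than anything measured from the demo: one detailed object is stored once, and each visible copy costs only a small per-instance transform.

```python
# One detailed object: 10,000 points (placeholder coordinates for the sketch).
tree_mesh = [(0.0, 0.0, 0.0)] * 10_000

# 1,000 placements of that same object. Each instance is just a position
# here; a real engine would store a full transform per instance.
instances = [(float(i), 0.0, 0.0) for i in range(1_000)]

# Memory cost in points if every copy were duplicated outright...
points_if_duplicated = len(tree_mesh) * len(instances)
# ...versus one mesh plus a list of cheap instance records.
points_actually_stored = len(tree_mesh) + len(instances)

print(points_if_duplicated)    # 10000000
print(points_actually_stored)  # 11000
```

This is why heavy repetition in a demo is a tell: a scene that looks enormous can be a small amount of unique data drawn many times.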

And there will most likely be a lot of aggressive culling, levels of detail and space partitioning going on.

If you look at the visual quality (not the geometric complexity) it does not look all that impressive; in fact, it looks fairly flat. The shadowing shown might be baked into the data rather than evaluated in real time.

I would love to see a demo with animated geometry and dynamic lighting.

  • If this is real, why has nobody else even thought about it so far?

Unless I'm completely wrong (and it wouldn't be the first time), my first answer would suggest a (perhaps very clever) use of existing technology, optimized and extended to create this demo. Making it into an actual game engine, with all the tasks it involves besides rendering static geometry, is a whole different ball game.

Of course, all this is pure speculation (which makes it a lot of fun for me). All I'm saying is that this is not necessarily a fake (in fact I don't think it is, and I'm still impressed), but it's probably not as groundbreaking as they make it sound either.

Bart
  • 659
  • 4
  • 11
5

These atoms actually aren't that magic/special/alien to current graphics hardware. It's just a kind of point-cloud or voxel-based rendering: instead of triangles they render points or boxes, nothing unachievable with current hardware.

This has been done before and is being done now; it isn't some miracle invention, though maybe they came up with a more memory- and time-efficient way to do it. Although it looks and sounds quite interesting, you should take this video with a grain of salt. Rendering 100,000 points instead of a fully textured polygon (that already takes up only a few pixels on screen) doesn't make your graphics quality better by a factor of 100,000.

And by the way, I've heard id Software is also trying out GPU-accelerated voxel rendering, but I have a bit more trust in John Carmack than in the speaker of this video :)

Christian Rau
  • 630
  • 8
  • 11
2

That was an investment scam.

As for the idea, it isn't feasible on current non-dedicated hardware. The number of points you would need to avoid gaps when looking at something close up is far beyond what you could fit in today's RAM. Even if you could, I don't know of any data structures or search algorithms that would yield anything near the performance shown in the demo. And even if it were somehow possible to search these points in real time, cache misses and memory bandwidth would kill the performance anyway.
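
A back-of-the-envelope calculation, using my own assumed numbers rather than anything from the video, illustrates the memory point for naive dense storage. A sparse structure storing only surfaces would need far less, but the order of magnitude is still daunting:

```python
# One "atom" per cubic millimetre for a modest 100 m x 100 m x 10 m level,
# packing colour + normal into 4 bytes per atom (an optimistic assumption).
atoms = 100_000 * 100_000 * 10_000   # level dimensions in millimetres
bytes_per_atom = 4

total_gib = atoms * bytes_per_atom / 2**30
print(f"{total_gib:,.0f} GiB")       # 372,529 GiB, uncompressed and dense
```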

I'm not saying such images can't be achieved in real time, just not with the method presented. My guess is that the demos were rendered with voxels, which have been used for decades and can already produce fairly high detail in real time: http://www.youtube.com/watch?v=BKEfxM6girI http://www.youtube.com/watch?v=VpEpAFGplnI

Hannesh
  • 169
  • 4
  • 2
    "That was an investment scam"... what do you base that on? Especially given the funding acquired recently and the videos being uploaded today? – Bart Aug 01 '11 at 20:52
  • 1
    What videos? They haven't updated in over a year. – Hannesh Aug 01 '11 at 21:03
  • 1
    http://www.youtube.com/user/EuclideonOfficial as posted by the OP in the comments. – Bart Aug 01 '11 at 21:06
1

From what I saw, it seems like they are using parametric shapes instead of simple polygon shapes; in other words, they change the geometry according to the required resolution.

This can be done using techniques such as geometry shaders and Perlin noise.

Another possibility is using GPGPU (e.g. CUDA) to render a scene including non-polygons and to perform ray tracing (for z-ordering and shadows). Yet another possibility is custom hardware that renders formulas instead of triangles.

Danny Varod
  • 1,148
  • 6
  • 14
0

Of all their claims, I think the memory compression seems like an exaggeration, though I could understand something like RLE compression having a great impact. In the end I think this system will have a lot of "pros", but also a lot of "cons", much like ray tracing or iso-surface rendering with marching cubes.

As far as rendering 'trillions' of atoms goes, I don't think that's what they're saying. What they are instead doing is searching for W * H atoms, i.e. one atom per pixel on the screen. This could be accomplished in a lot of slow, difficult ways; some ways of speeding it up are kd-trees, BSP trees, octrees, etc. In the end, though, it is a lot of data being sorted through, and the fact that their demo apparently searches 1440x720 atoms more than once per frame (because of the shadows/reflections in their demo) is amazing. So kudos!
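
For a sense of scale, here is that per-frame search budget as plain arithmetic. The pass count and frame rate are my own assumptions drawn from this thread, not confirmed figures:

```python
width, height = 1440, 720   # resolution mentioned above
passes = 2                  # primary pass plus a shadow/reflection pass (a guess)
fps = 20                    # software frame rate quoted in the video

# Each pass performs roughly one tree search per pixel.
lookups_per_second = width * height * passes * fps
print(f"{lookups_per_second:,}")  # 41,472,000 searches per second
```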

Martijn Pieters
  • 14,499
  • 10
  • 57
  • 58
-1

The way it works is much simpler than you might think: instead of pre-loading, say, a game level, it only loads a single screen's worth of data, one or a few atoms per pixel on your screen, nothing more. The game/engine then predicts what the next frames will be, and that's the only thing loaded; only the visible part of an object is rendered, not the entire object. PROS: as much definition and resolution as your monitor can handle; low memory usage. CONS: the read rate from disk is fairly large and could lead to low frame rates.

marty
  • 1
  • this doesn't seem to offer anything substantial over points made and explained in prior 7 answers – gnat Jul 10 '15 at 08:56