unclad5968 18 hours ago

This is awesome! At the end you mention the 27k dragons and 10k lights just barely fit in 16ms. Do you see any paths to improve performance? I've seen some demos with tens/hundreds of thousands of moving lights, but it's hard to tell if they're legit or highly constrained. I'm not a graphics programmer by trade.

I need a renderer for a personal project and after some research decided I'll implement a forward clustered renderer as well.

  • logdahl 18 hours ago

    Well, the core issue is still drawing. I took another look at some profiles and it seems like it's not the renderer limiting this to 27k! I still had some stupid scene-graph traversal... Clustering and culling are 53us and 33us respectively, but the draw is 7ms. So a frame (on the GPU side) is like 7ms, plus some 100-200 us on the CPU side.

    Should really dive deeper and update the measurements for final results...

    • godelski 11 hours ago

      I haven't looked at the post in the detail it deserves, but given your graphs the workload looks pretty bursty. I'd suspect there are some good I/O optimizations or some predication. Definitely that last void main block looks ripe for that. But I'd listen to Knuth, premature optimization and all, so grab a profiler. I wouldn't be surprised if you're nearing peak performance. Also, NVIDIA GPUs have a lot of special tricks that can be exploited but are buried in documentation... if you haven't already seen it (I suspect you have), you'd be interested in "GPU Gems". Gems 2 has some good stuff on predication.

      But also, really good work! You should be proud of this! Squeezing that much out of that hardware is no easy feat.

  • gmueckl 18 hours ago

    This seems fairly well optimized. There's probably room to squeeze out some more perf, but no dramatic improvements. Maybe preventing overdraw of shaded pixels by doing a depth prepass would help.
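
    A minimal sketch of what that prepass could look like, assuming a GL-style pipeline; the shader and uniform names are illustrative, not from the post. The prepass writes only depth, then the shading pass runs with an EQUAL depth test so each pixel is shaded exactly once:

        #version 450
        // Depth-only prepass: no color attachments bound, depth writes enabled.
        layout(location = 0) in vec3 a_position;
        uniform mat4 u_mvp; // illustrative name, not from the post
        void main() {
            gl_Position = u_mvp * vec4(a_position, 1.0);
        }
        // No fragment shader needed here. The shading pass then binds its normal
        // shaders with glDepthFunc(GL_EQUAL) and depth writes disabled, so hidden
        // fragments are rejected before the expensive lighting work runs.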

    Without digging into the detailed breakdown, I would assume that the sheer number of teeny tiny triangles is the main bottleneck in this benchmark scene. When triangles become smaller than about 4x4 pixels, GPU utilization during rasterization starts to diminish. And with the scaled-down dragons, there are a lot of them in the frame.

    • spookie 15 hours ago

      This is by far the biggest culprit, OP; look into this.

      You can try to come up with impostors representing these far-away dragons, or simple LoD levels. Some games do use particles to represent far-away, repeated "meshes" (Ghost of Tsushima does this for distant soldiers).

      Lots of techniques in this area, ranging from simple to bananas. LoD levels alone can get you pretty far! Of course, this comes at the cost of more distinct draw calls, so it is a balancing act.
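
      To make the balancing act concrete, here is a rough sketch of distance-based LOD selection as it might appear in a GPU-driven culling shader; the thresholds and names are invented for illustration:

          // Pick a LOD from the projected size of a mesh's bounding sphere.
          // cotHalfFov = 1.0 / tan(fovY / 2); all thresholds are made up.
          uint pickLod(float sphereRadius, float viewDist, float cotHalfFov,
                       float screenHeightPx) {
              // Approximate on-screen height of the bounding sphere, in pixels.
              float projectedPx = (sphereRadius * cotHalfFov / viewDist) * screenHeightPx;
              // Drop one LOD level each time the projected size halves below
              // 256 px, clamped to the coarsest level (4 here, arbitrarily).
              return min(uint(max(log2(256.0 / max(projectedPx, 1.0)), 0.0)), 4u);
          }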

      Think about the topology too; hope these old gems help you get a grasp on the cost of this:

      https://www.humus.name/index.php?page=Comments&ID=228

      https://www.g-truc.net/post-0662.html

      • logdahl 13 hours ago

        Yeah, I use LODs already, but as you say, even my lowest LOD has too many vertices far away. Impostor rendering seems very interesting but also completely bonkers (viewing angle, lighting)!

  • zokier 16 hours ago

    Worth noting that the GTX 1070 is a nearly 10-year-old "mainstream" GPU. I'd imagine a 5090 or something could push the numbers a fair bit higher.

  • cullingculling 5 hours ago

    (GPU-driven) occlusion culling with meshlet rendering would help a lot while being relatively straightforward to implement if you already have a GPU-driven engine like OP does. Occlusion culling techniques cull objects that are completely hidden behind other objects. Meshlets break objects up (at asset build time) into small clusters of around 64 to 128 triangles, such that each meshlet can be individually occlusion culled. This allows the renderer to skip not just dragons that are hidden behind other dragons, but even parts of each dragon that are occluded by the rest of the dragon itself! There's a talk on YouTube about the Alan Wake 2 team implementing these techniques and being able to cull complex outdoor scenes of (iirc) hundreds of millions of triangles down to around 10-20 million.

    The basic idea is to first render as normal some meshes that you either know are visible, or are likely to occlude objects in the scene (say the N closest objects, or some large terrain feature in a real game). Then you can take the resulting depth buffer and downsample it into something resembling a mipmap chain, but with each level holding the max depth of the contributing pixels, rather than the average. This is called a hierarchical Z (depth) buffer, or HZB for short. This can be used to very quickly, with just a few samples of the HZB, test if an object's bounding box is behind all the pixels in a given area and thus definitely not visible. The hierarchical nature of the HZB allows both small and large meshes to be tested at the same performance cost.
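
    One reduction step of that chain could look like the compute shader below; a sketch that assumes power-of-two mip sizes (edge handling for odd dimensions is omitted) and uses binding names invented for illustration:

        #version 450
        // Build HZB mip N from mip N-1: each texel takes the MAX (farthest)
        // depth of the 2x2 block below it, so one sample conservatively bounds
        // everything drawn in that region.
        layout(local_size_x = 8, local_size_y = 8) in;
        layout(binding = 0) uniform sampler2D u_prevMip;               // mip N-1
        layout(binding = 1, r32f) writeonly uniform image2D u_curMip;  // mip N
        void main() {
            ivec2 dst = ivec2(gl_GlobalInvocationID.xy);
            ivec2 src = dst * 2;
            float d0 = texelFetch(u_prevMip, src,               0).r;
            float d1 = texelFetch(u_prevMip, src + ivec2(1, 0), 0).r;
            float d2 = texelFetch(u_prevMip, src + ivec2(0, 1), 0).r;
            float d3 = texelFetch(u_prevMip, src + ivec2(1, 1), 0).r;
            imageStore(u_curMip, dst, vec4(max(max(d0, d1), max(d2, d3))));
        }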

    Typically, a game would track which meshlets were known to be visible last frame, and start by rendering all of those (with updated positions and camera orientation, of course). This will make up most of what is drawn to the scene, because typically objects and the camera change very little from frame to frame. Then all the meshlets that weren't known to be visible get tested against the HZB, and just the few that were revealed by changes in the scene will need to be rendered. Lastly, at some point the known visible meshlet set should be re-tested, so that it does not grow indefinitely with meshlets that are no longer visible.
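
    The visibility test itself might look roughly like this; a sketch that assumes world-space AABBs per meshlet, a max-depth HZB sampled with nearest filtering, standard GL depth in [0, 1], and that skips the near-plane/frustum corner cases a real implementation must handle:

        #version 450
        // Second culling pass: test each not-yet-visible meshlet against the
        // HZB and append survivors to an indirect draw list.
        layout(local_size_x = 64) in;
        struct Meshlet { vec4 boundsMin; vec4 boundsMax; };  // world-space AABB
        layout(std430, binding = 0) readonly buffer Meshlets { Meshlet meshlets[]; };
        layout(std430, binding = 1) buffer DrawList { uint count; uint ids[]; };
        layout(binding = 2) uniform sampler2D u_hzb;  // max-depth chain, nearest filtered
        uniform mat4 u_viewProj;
        uniform vec2 u_hzbSize;  // dimensions of HZB mip 0, in texels

        void main() {
            uint i = gl_GlobalInvocationID.x;
            if (i >= uint(meshlets.length())) return;
            vec3 mn = meshlets[i].boundsMin.xyz, mx = meshlets[i].boundsMax.xyz;
            // Project the 8 AABB corners; track screen extent and nearest depth.
            vec2 uvMin = vec2(1.0), uvMax = vec2(0.0);
            float nearest = 1.0;
            for (int c = 0; c < 8; ++c) {
                vec3 p = mix(mn, mx, vec3(c & 1, (c >> 1) & 1, (c >> 2) & 1));
                vec4 clip = u_viewProj * vec4(p, 1.0);
                vec3 ndc = clip.xyz / clip.w;  // breaks if the box crosses the near plane
                uvMin = min(uvMin, ndc.xy * 0.5 + 0.5);
                uvMax = max(uvMax, ndc.xy * 0.5 + 0.5);
                nearest = min(nearest, ndc.z * 0.5 + 0.5);
            }
            // Choose the mip where the box spans about one texel, so the four
            // corner samples below conservatively cover its whole footprint.
            vec2 sizePx = (uvMax - uvMin) * u_hzbSize;
            float lod = ceil(log2(max(max(sizePx.x, sizePx.y), 1.0)));
            float farthest = max(
                max(textureLod(u_hzb, uvMin, lod).r,
                    textureLod(u_hzb, vec2(uvMax.x, uvMin.y), lod).r),
                max(textureLod(u_hzb, vec2(uvMin.x, uvMax.y), lod).r,
                    textureLod(u_hzb, uvMax, lod).r));
            // Occluded only if the box's nearest point is behind the farthest
            // occluder depth over the footprint; otherwise keep the meshlet.
            if (nearest <= farthest) {
                ids[atomicAdd(count, 1u)] = i;
            }
        }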

    The result is that the first frame rendered after a major camera change (like the player respawning) will be slow, as all the meshlets in the frustum need to be rendered. But after that, the scene can be narrowed down to just the meshlets that actually contributed to the frame, and performance improves significantly. I think this would be more than enough for a demo, but for a real game you would probably want to explore methods to speed up that first frame's rendering, like sorting objects and picking the N closest/largest ones so you can at least get some occlusion culling working.

monster_truck 3 hours ago

Am I missing a link somewhere, or is there no way to build/run this myself? Interested to see what a modern flagship GPU is good for.

rezmason 17 hours ago

Ten thousand lights! Your utility bill must be enormous

  • Flex247A 17 hours ago

    Lights in games use real electricity :)

    • amelius 17 hours ago

      Even the stars use real electricity.

      • cluckindan 13 hours ago

        Not really, nuclear fusion doesn’t run on electrons.

        • DiabloD3 10 hours ago

          So where does the magnetic field come from? ;) ;) ;)

          • cluckindan an hour ago

            Nuclear fusion produces a million times more energy from proton and neutron collisions than is produced by electron shells during the same event.

wizzwizz4 3 hours ago

> As some other renderers do, we share a single GPU buffer for all vertex data. Instead, we use a simple allocator which manages this contigous buffer automatically.

I'm not sure what this part is supposed to say, but it doesn't look right. "Instead" usually follows differences, not similarities.

fabiensanglard 18 hours ago

This website has a beautiful layout ;) !

  • logdahl 17 hours ago

    Fun to see you ;) Love your site!

zeristor 18 hours ago

Apostrophe as a number separator?

Where’s that from?