What do you mean by "software-based rendering"? I was under the impression that all of the rendering is on the GPU: first the point clouds are calculated in a compute shader, and after that the points are converted to 'splats' and rendered as quads.
It is strange that Dreams didn't make a bigger splash. It is the first time an engine has focused on making low-polygon assets look as aesthetically pleasing as possible. The ability to turn up the rendering roughness in some early sketches, and then tune it down once more details are added, is incredible. I expected a small revolution with many competing engines reusing the tech, but nothing has happened so far?
Software-based rendering doesn't use the fixed-function hardware stages of the GPU. It won't use the ROPs or other fixed-function rasterisation hardware; instead it does everything in software using GPGPU techniques.
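To make that concrete, here's a minimal sketch (plain Python, and not Media Molecule's actual code) of how a "software" splat rasterizer can put pixels on screen without the ROPs: each point is scattered directly into a framebuffer, with the depth test done manually in the shader code instead of in fixed-function hardware. On a real GPU each call would be a compute-shader thread, and the compare-and-write would need to be atomic (e.g. an atomicMin on a packed depth|color value).

```python
WIDTH, HEIGHT = 8, 8

# Flat framebuffer: each cell holds (depth, color); initialised to "far".
FAR = float("inf")
framebuffer = [(FAR, 0) for _ in range(WIDTH * HEIGHT)]

def splat_point(x, y, depth, color):
    """Write one projected point straight into the framebuffer,
    doing the depth test 'in software' rather than in the ROP stage."""
    if not (0 <= x < WIDTH and 0 <= y < HEIGHT):
        return  # clipped, done manually too
    idx = y * WIDTH + x
    # Manual depth test. On a GPU this read-compare-write must be
    # atomic, since many threads may hit the same pixel.
    if depth < framebuffer[idx][0]:
        framebuffer[idx] = (depth, color)

# Two points landing on the same pixel: the nearer one wins.
splat_point(3, 4, depth=0.7, color=0xFF0000)
splat_point(3, 4, depth=0.2, color=0x00FF00)
splat_point(0, 0, depth=0.5, color=0x0000FF)
```

The point of the sketch is only that no fixed-function blending or depth hardware is involved: the "pipeline" is ordinary memory writes driven by compute code.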
I watched the talk you linked to, but I couldn't understand most of it. I'm currently reading through the Real-Time Rendering book to make sense of it.
I thought once they create the splats as quads, the rest of the pipeline is standard? Each quad should be two triangles that get clip-space coordinates, then the pixel shader paints the brush texture, and then the ROP assembles them on screen. Could you point out where I'm wrong? And in what way can a pixel end up on the screen without going through the ROPs?
I'm not actually sure anymore after looking at more content. Based on the video here https://www.youtube.com/watch?v=1Gce4l5orts&t=1225s , it appears that they aren't even using the splat engine anymore (which I thought they were), and are instead using the older 'brick'-based engine.