What are shaders? (thatonegamedev.com)
39 points by lylejantzi3rd on May 31, 2023 | 18 comments


I like shaders as much as the next guy, but this page is almost content-free. If you're looking to learn more about shaders and like interactive demos, I'd recommend:

https://thebookofshaders.com/


Some discussions:

The Book of Shaders (2015) - https://news.ycombinator.com/item?id=32117536 - July 2022 (22 comments)

The Book of Shaders - https://news.ycombinator.com/item?id=23497924 - June 2020 (26 comments)

The Book of Shaders: Noise (2015) - https://news.ycombinator.com/item?id=18868811 - Jan 2019 (1 comment)

The Book of Shaders - https://news.ycombinator.com/item?id=15599935 - Nov 2017 (37 comments)

The Book of Shaders - https://news.ycombinator.com/item?id=11457322 - April 2016 (15 comments)

The Book of Shaders - https://news.ycombinator.com/item?id=9215582 - March 2015 (40 comments)


I see a very intriguing table of contents, but the links only lead up to "Fractional brownian motion". So, basically, 3 incomplete chapters out of 6. And there's a "Subscribe to the newsletter" button. The website is dated 2015.

So, what's up with the project? Is it essentially abandoned, or what? There appear to be 2-month-old commits in the repository, though. Or is it just being written at an extremely slow rate? I don't get it.

Edit: yeah, it appears to be abandoned. https://github.com/patriciogonzalezvivo/thebookofshaders/iss...


It will likely never be finished, but the stuff that is there is solid gold, and is how I first learned shaders years ago.


I'm seriously thinking of writing a shader book myself.

I'd like to write it in WGSL, though... and I'm not sure whether that would deter potential readers simply because it's not HLSL.


Also, for learning by example this website is gold:

https://www.shadertoy.com/


I've read parts of both linked articles, but so far the one question I have hasn't been answered.

Why are they called shaders?


Before you could do fancy things with them like generative AI, physics simulations, cryptocurrency mining, or even real-time ray tracing, graphics cards were meant for, well, rasterised graphics.

Before today's programmable pipeline, graphics pipelines were fixed-function: see OpenGL before v2.0, or DirectX before v9. And there were (and still are) algorithms to colour fragments before/after rasterisation, called shading algorithms: Phong shading, Blinn-Phong shading, Gouraud shading, etc.

So the name stuck, even though now a shader can do a lot more than just set `gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0)`.
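For concreteness, here's roughly what one of those shading algorithms looks like as a modern fragment shader. This is only a minimal sketch of per-fragment Blinn-Phong; all of the names (uLightDir, uLightColor, uAlbedo, uShininess, vNormal, vViewDir) are made up for illustration, not anything the article specifies:

```glsl
#version 330 core
// Minimal per-fragment Blinn-Phong sketch. Uniform/varying names are illustrative.

in vec3 vNormal;    // interpolated surface normal (world space)
in vec3 vViewDir;   // direction from the fragment towards the camera

uniform vec3 uLightDir;    // direction towards the light, normalised
uniform vec3 uLightColor;  // light colour/intensity
uniform vec3 uAlbedo;      // surface base colour
uniform float uShininess;  // specular exponent

out vec4 fragColor;

void main() {
    vec3 n = normalize(vNormal);
    vec3 l = normalize(uLightDir);
    vec3 v = normalize(vViewDir);
    vec3 h = normalize(l + v);                        // Blinn's half-vector

    float diff = max(dot(n, l), 0.0);                 // Lambertian diffuse term
    float spec = pow(max(dot(n, h), 0.0), uShininess);

    vec3 colour = (uAlbedo * diff + vec3(spec)) * uLightColor;
    fragColor = vec4(colour, 1.0);
}
```

The whole program runs once per fragment, which is exactly why "shader" was a natural name for it.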


Given how vastly multipurpose GPUs are now for all manner of parallel processing, why is everyone still beholden to the paradigm of shaders? It seems weird to shove so much specialized computation (like ML) into this weird graphics programming metaphor, using vertex and fragment shaders, etc. Is the industry working to create a more generalized version of GPUs in the future? Sort of like...PPUs? Parallel processing units?


> Is the industry working to create a more generalized version of GPUs in the future? Sort of like...PPUs? Parallel processing units?

That's what GPUs already are. GPU compute frameworks like CUDA, ROCm, SYCL, etc. make it easy to actually use GPUs for compute, which is what I think all the Big Science people are doing, including the applications I mentioned. If I recall correctly, the Ethereum GPU miner depends on CUDA being installed for NVIDIA graphics cards. There are many, many cosmological simulations that run almost entirely on the GPU, also CUDA-accelerated.

> why is everyone still beholden to the paradigm of shaders?

I don't think this has been true since CUDA. Sure, people used to do weird but interesting hacky things like writing compute workloads as fragment shaders, which wrote into 2D/3D textures to be read back by the CPU. However, we now actually have GPGPU programming, so it's a bit unconventional to use graphics-API compute shaders for serious compute work, unless you need graphics API integration (DX, OpenGL, Vulkan, etc.) for use cases such as particle and cloth simulations in games.
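For comparison, the "graphics API integration" route looks roughly like the following: a compute shader dispatched through OpenGL or Vulkan rather than CUDA. This is just a SAXPY-style sketch; the buffer bindings, block names, and workgroup size are arbitrary choices for illustration:

```glsl
#version 430 core
// Sketch of a GLSL compute shader: y = a*x + y over two storage buffers.
// Binding points and workgroup size are illustrative.

layout(local_size_x = 256) in;

layout(std430, binding = 0) buffer XBuf { float x[]; };
layout(std430, binding = 1) buffer YBuf { float y[]; };

uniform float a;

void main() {
    uint i = gl_GlobalInvocationID.x;
    // Guard the last workgroup against running past the end of the buffers.
    if (i < uint(y.length())) {
        y[i] = a * x[i] + y[i];
    }
}
```

The host side binds the two buffers, sets the uniform, and calls the API's dispatch function with enough workgroups to cover the array; CUDA expresses the same idea with kernels and grids, just without the graphics API plumbing.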

At any rate, shaders are convenient for graphics because, to a first approximation, the overwhelming majority of video games do the same things in the same order. They receive a scene, consisting of:

- some geometry

- some parameters attached to said geometry (materials, colours, textures, etc)

- the lights in said scene

- a camera with some parameters (raster resolution, or the 'sensor resolution' of the camera; a viewing frustum, which determines the field of view, the near/far planes, etc)

- a list of post-process effects (anti-aliasing, bloom, motion blur, and now upscaling like DLSS)

and they tell the GPU to perform the following:

- transform and cull/clip the geometry to the camera viewing frustum

- rasterise the geometry into fragments, interpolating per-primitive attributes (vertex coordinates, normals, etc.) that are usually supplied as out parameters from a vertex shader

- interpolate any parameters, sample any textures in the fragment shader, and colour a pixel

- apply post-processing effects, also run as shaders

The shader paradigm is very useful because it allows developers to work per-pixel, in an almost functional-programming manner. If you consider the shader as a gigantic lambda function that is executed per pixel, with a fixed set of outputs for a fixed set of inputs, then it becomes easy to extend and to do things like forward and deferred rendering, where shaders write to G-buffers, which are then used as inputs for more shaders, and so on. So the GPU is a gigantic map-reduce hardware accelerator, even when using shaders.
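Concretely, the vertex/fragment split described above might look like this GLSL sketch (two separate shader sources). The names (uModel, uViewProj, uAlbedoTex, gAlbedo, gNormal) are illustrative: the vertex shader transforms geometry into clip space and emits interpolants, and the fragment shader runs per covered pixel and writes into a couple of G-buffer targets for a later lighting pass, in the deferred style mentioned above:

```glsl
// ---- vertex shader (own file): transform geometry, emit interpolants ----
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aTexCoord;

uniform mat4 uModel;     // object -> world
uniform mat4 uViewProj;  // world -> clip (camera + frustum)

out vec3 vWorldNormal;
out vec2 vTexCoord;

void main() {
    vWorldNormal = mat3(uModel) * aNormal;   // assumes no non-uniform scaling
    vTexCoord    = aTexCoord;
    gl_Position  = uViewProj * uModel * vec4(aPosition, 1.0);
}
```

```glsl
// ---- fragment shader (own file): one invocation per covered pixel ----
#version 330 core
in vec3 vWorldNormal;
in vec2 vTexCoord;

uniform sampler2D uAlbedoTex;

layout(location = 0) out vec4 gAlbedo;  // G-buffer target read by a lighting pass
layout(location = 1) out vec4 gNormal;

void main() {
    gAlbedo = texture(uAlbedoTex, vTexCoord);
    gNormal = vec4(normalize(vWorldNormal) * 0.5 + 0.5, 1.0); // pack [-1,1] into [0,1]
}
```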

The only reason dedicated ray-tracing hardware was added to modern GPUs is because ray tracing completely turns the above pipeline on its head. It involves firing rays out of the camera into the scene and testing for intersections, which is, in essence, solving a giant set of simultaneous equations. This has a lot of branches and high-precision floating-point maths, which (consumer) GPUs were famously bad at until fairly recently; ergo, the best production ray tracers for feature films (RenderMan, Mitsuba, MoonRay, etc.) are still primarily CPU ray tracers.
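To make the "solving an equation per ray" point concrete, here is a toy sketch in a fragment shader: one ray per pixel, intersected against a single hard-coded sphere by solving the usual quadratic. Everything here (uResolution, the sphere position, the camera setup) is an illustrative assumption, not anything the comment prescribes:

```glsl
#version 330 core
// Toy ray tracer: fire one ray per pixel at a hard-coded sphere,
// shade hits white and misses black. Names and constants are illustrative.

uniform vec2 uResolution;  // viewport size in pixels

out vec4 fragColor;

// Distance t along the ray to the nearest hit, or -1.0 on a miss.
// Assumes rayDir is normalised.
float intersectSphere(vec3 rayOrigin, vec3 rayDir, vec3 center, float radius) {
    vec3 oc = rayOrigin - center;
    float b = dot(oc, rayDir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;                  // discriminant of the quadratic
    return disc < 0.0 ? -1.0 : -b - sqrt(disc);
}

void main() {
    // Map the pixel onto an image plane one unit in front of the camera.
    vec2 uv = (gl_FragCoord.xy / uResolution) * 2.0 - 1.0;
    vec3 rayOrigin = vec3(0.0);
    vec3 rayDir = normalize(vec3(uv, -1.0));

    float t = intersectSphere(rayOrigin, rayDir, vec3(0.0, 0.0, -3.0), 1.0);
    fragColor = (t > 0.0) ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0);
}
```

A real ray tracer replaces that single sphere test with traversal of an acceleration structure over millions of primitives, which is where the divergent branching comes from.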


And even before that point, shaders were programs you would run as user-programmable components of non-realtime software renderers. IIRC it was Pixar who first used the term "shader", in RenderMan back in the 80s.


Because originally they were used to compute shading (in real time) for the scene being rendered. Shading, calculated per pixel, is a very computation-intensive operation.

But it turns out you can do so much more with them than computing shading. The name just stuck.


Originally they were for lighting, where each pixel of a triangle is shaded based on its illumination. When other kinds of graphics processing functions were added (e.g. manipulating geometry), they kept the name.


Back in the old comic book days, the act of drawing darker parts of a person or object to indicate lighting or shadows was called “shading”. The name stuck because the first shaders were focused on that. Later, these GPU programs were used for other things, leading to somewhat nonsensical-sounding terms like “vertex shader”. But yeah, that’s why they were originally called that: they shaded the dark parts of an image.


Because they're "shading" pixels, at the very base.

Essentially they're just programs for your GPU.


I have always assumed it's that originally their purpose was to provide the shade of each pixel.


For a brief time when they were still written in assembly, they were called "programs" in OpenGL, which I thought was a better name for them. I guess "shaders" stuck because it was also used in Direct3D, even though it's a more confusing name.


https://en.m.wikipedia.org/wiki/Shader#Vertex_shaders

TL;DR: originally, they were programmable functions loaded into the pixel-shading stage of the hardware pipeline.



