In my project I use deferred lighting, HDR, and a linear pipeline (not PBR).
There is this annoying banding in the albedo. It happens at the sub-texel level, i.e. after the initial sampling and linear interpolation. All screenshots are lossless PNGs.
Pic1 banding example: https://i.imgur.com/phMaSYr.png
Pic2 banding example: https://i.imgur.com/CWAnxLJ.png
A friend of mine suggested switching the albedo buffer to an sRGB format. It worked:
Pic3 no banding: https://i.imgur.com/6vmLvuB.png
But there is a problem. I have two types of geometry: regular polygonal geometry, and raymarched geometry (a fractal) drawn into the g-buffer by a quad command after the scene pass. After I switched the albedo buffer to sRGB, the polygonal geometry became darker, while the raymarched one stayed the same:
Pic4 before-after comparison: https://i.imgur.com/47qFQxZ.png
I have to do a linear-to-gamma conversion, pow(albedo, 1/2.2), in the model's pixel shader to compensate for that. So it seems like the GPU correctly converts the data when the quad command writes into the sRGB rendertarget, and when the light pass reads it, but not when the scene pass writes into it.
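For reference, here is roughly what the two write paths look like after the sRGB switch. This is only a minimal HLSL sketch to illustrate the setup; the resource names, entry points, and the raymarch stub are placeholders, not my actual code:

// Scene pass: model's pixel shader (polygonal geometry).
// Without the manual pow() the polygons come out darker once the albedo RT is sRGB.
Texture2D    AlbedoTex     : register(t0);
SamplerState LinearSampler : register(s0);

float4 ModelPS(float2 uv : TEXCOORD0) : SV_Target0
{
    float3 albedo = AlbedoTex.Sample(LinearSampler, uv).rgb; // sampled albedo
    albedo = pow(albedo, 1.0 / 2.2);   // linear-to-gamma compensation I had to add
    return float4(albedo, 1.0);        // written to the sRGB albedo rendertarget
}

// Quad command after the scene pass: raymarched fractal.
// No compensation here, and it looks the same before and after the switch.
float3 RaymarchFractalAlbedo(float2 uv)
{
    return float3(uv, 0.5); // placeholder for the actual fractal raymarcher
}

float4 RaymarchPS(float2 uv : TEXCOORD0) : SV_Target0
{
    float3 albedo = RaymarchFractalAlbedo(uv);
    return float4(albedo, 1.0);
}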
Is it a bug? A feature?
I can also fix the banding by setting the albedo buffer to an RGBA16F format, but that would increase memory bandwidth. And I can't see any quality compromise in 8-bit sRGB compared to 16-bit.