I was working on an edge detection shader when I stumbled upon something strange. The left image shows the scene view with normals visualized. The right image shows the same scene after passing through a shader that multiplies brightness by 1000 and subtracts 999, exposing extremely small imperfections in the normals. I could add a threshold to ignore these tiny differences, but I would prefer to work with full precision. Is there a way to fix this? (I am working in the standard 3D pipeline of the latest version, not URP or HDRP.)
This seems to occur only when a face points exactly in one of the six cardinal directions (when one component of the normal is -1 or 1), and the effect is surprisingly stable when moving the camera side to side. The rotation of the object also seems to matter for some reason, so a cube rotated 0 degrees and a cube rotated 90 degrees fail in different ways.
Well, if you multiply (1, 0, 0) by 1000 and subtract 999 you'll get (1, -999, -999). I don't think that's what you wanted, but I'm not sure what exactly your shader does...
That is the intended result. This shader is meant to show that not all of the values that are supposed to be 1 actually are 1. Some are 0.9999 or so, and the operation then produces (0.9, -999, -999) instead. The image on the right should not have stripes. The shader that is actually hindered by the 0.9999s is an edge detection shader that I want to be extremely sensitive, and because of that sensitivity it picks up these very minor differences.
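To make the magnification trick concrete, here is a minimal Python sketch of it. The 0.9999 value is just an illustrative imperfect normal component, not taken from any actual texture:

```python
# Magnify tiny errors near 1.0: multiply by 1000, then subtract 999.
# An exact 1.0 maps back to exactly 1.0, while a value that is off
# by only 0.0001 lands visibly far from 1.0.
def magnify(v):
    return v * 1000 - 999

exact = magnify(1.0)        # stays exactly 1.0
imperfect = magnify(0.9999) # ends up near 0.9

print(exact, imperfect)
```

Any component that was already far below 1 (like the 0 components) is pushed to large negative values, which is why (1, 0, 0) becomes (1, -999, -999).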
This is the edge detection shader. You can see that the faces that have the stripes cause errors in the edge detection. The blocks that are at angles are fine.
But only six directions in world space are supposed to contain 1s... I still don't get it. How exactly are you visualizing these errors (stripes)?
That is exactly my point. Any angle which does not include a 1 or -1 works properly, but when Unity is supposed to generate a normal containing a 1 or -1, it has a 50% chance to do so and a 50% chance to be off by a usually imperceptibly small amount.
Do you normalize the input normal in the fragment shader? I believe everything is linearly interpolated between vertices, including normals, so they may not actually be unit length. Maybe there are also precision artifacts when interpolating (1, 0, 0) into (1, 0, 0)...
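The interpolation point is easy to demonstrate numerically: linearly interpolating between two unit normals generally gives a vector shorter than unit length, which is why fragment shaders re-normalize. A quick sketch in plain Python with made-up vertex normals:

```python
import math

def lerp(a, b, t):
    # Component-wise linear interpolation, like the rasterizer does.
    return [ai + (bi - ai) * t for ai, bi in zip(a, b)]

def length(v):
    return math.sqrt(sum(c * c for c in v))

# Two unit-length vertex normals, 90 degrees apart (hypothetical values).
n0 = [1.0, 0.0, 0.0]
n1 = [0.0, 1.0, 0.0]

mid = lerp(n0, n1, 0.5)  # (0.5, 0.5, 0.0)
print(length(mid))       # ~0.707, well short of 1.0
```

Dividing the interpolated vector by its own length restores unit length, which is the usual fix in the fragment shader.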
I also checked, and it seems that in some cases they are simply not exactly 1.0.
Code (CSharp):
return normal;
Code (CSharp):
return any(normal == 1);
This is using the built-in camera depth normals texture, yes? Those normals are stored in a two-channel, low-precision view-space encoding. Nothing will ever be consistently, perfectly "1.0" in world space because of this. You'll need some amount of biasing to deal with the fact that there's always a bit of error / noise in the normal. If you're using deferred rendering and the normals gbuffer, or producing the normal texture yourself, please post your actual code. Also know that a flat surface interpolating a normal may not produce perfectly constant values due to precision issues; however, I would expect flat, axis-aligned surfaces to work if you're interpolating world-space values and normalizing the output.
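For reference, here is a rough Python model of what I believe the built-in texture does: the stereographic encode/decode from UnityCG.cginc (EncodeViewNormalStereo / DecodeViewNormalStereo, with kScale = 1.7777 as I read that file), plus rounding to 8 bits to simulate the two ARGB32 channels. Treat the constants and formulas as an approximation for illustration, not a byte-exact reproduction:

```python
K_SCALE = 1.7777  # kScale constant from UnityCG.cginc's stereographic encoding

def encode(n):
    # n is a unit view-space normal (nx, ny, nz), assuming nz != -1.
    nx, ny, nz = n
    ex = nx / (nz + 1.0) / K_SCALE * 0.5 + 0.5
    ey = ny / (nz + 1.0) / K_SCALE * 0.5 + 0.5
    return ex, ey

def quantize8(v):
    # Simulate storing the value in one 8-bit channel of an ARGB32 texture.
    return round(v * 255.0) / 255.0

def decode(enc):
    ex, ey = enc
    nx = ex * 2.0 * K_SCALE - K_SCALE
    ny = ey * 2.0 * K_SCALE - K_SCALE
    g = 2.0 / (nx * nx + ny * ny + 1.0)
    return g * nx, g * ny, g - 1.0

# A normal facing the camera exactly: it should decode back with z == 1.0,
# but 8-bit storage cannot represent the encoded (0.5, 0.5) exactly.
n = (0.0, 0.0, 1.0)
stored = tuple(quantize8(c) for c in encode(n))
decoded = decode(stored)
print(decoded)  # z comes back as ~0.9998, not 1.0
```

That ~0.0002 error is exactly the kind of thing a *1000 - 999 visualization turns into visible stripes, and why some amount of bias is unavoidable with this texture.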
Yes, it was the built-in normal texture. I guess I can add a tiny bias; I had just hoped to avoid it. How difficult would it be to generate my own, truly flat normal texture?
You’d need to write your own replacement shader that calculates world-space normals instead of view-space normals, and output to an ARGBHalf texture instead of the two 8-bit channels of the ARGB32 the built-in uses. This is what the built-in one looks like: https://github.com/TwoTailsGames/Un...rcesExtra/Internal-DepthNormalsTexture.shader And here’s how to use replacement shaders: https://docs.unity3d.com/Manual/SL-ShaderReplacement.html