Hi, I'm trying to create a ground blending system for objects like grass, rocks, walls etc., and I would like to ask for some advice on how to deal with the problems I've encountered. Note: I am using URP 7.3 and Shader Graph in my project.

The main goal is to make a system that reduces the hard edges between intersecting meshes, something like this: (reference image)

My mechanism works like this: there is an additional orthographic camera looking at the ground from the top, rendering selected layers with the terrain to a render texture (R16G16B16A16). There is an additional blit material that stores the rendered color in RGB and the depth in alpha. After that, my custom shaders read global parameters and lerp between the ground color and their own color based on the depth from alpha.

This works, however I am not satisfied with the quality of the results. The first and main problem is stretching:

Spoiler: Stretching (screenshot)

And the second one is the blending of dark and light textures, which is probably a side effect of the first problem:

Spoiler: Shadows (screenshot)

The questions: I guess the main problem is rendering the ground with lighting, and if so, how could I fix that? Possibly my method is not the best approach to blending, is there another, better way? Also I am not sure if using a big 16 bit texture is actually wise, is it better to render color and depth separately? I would appreciate any advice and tips.
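For context, the blend in the object shaders boils down to something like the sketch below (simplified; the property names are just placeholders, and in my project this lives in a Shader Graph custom function):

Code (csharp):
// Simplified sketch of the blend (property names are placeholders).
// _GroundBlendTex: RGB = ground color from the top-down camera, A = ground height (0-1).
// _GroundBounds: xy = world-space min XZ covered by the ortho camera, zw = 1 / covered size.
TEXTURE2D(_GroundBlendTex);
SAMPLER(sampler_GroundBlendTex);
float4 _GroundBounds;
float _GroundHeightMin;
float _GroundHeightMax;
float _BlendDistance;

half3 BlendWithGround(half3 surfaceColor, float3 positionWS)
{
    // project the world position into the top-down render texture's UV space
    float2 groundUV = (positionWS.xz - _GroundBounds.xy) * _GroundBounds.zw;
    half4 ground = SAMPLE_TEXTURE2D(_GroundBlendTex, sampler_GroundBlendTex, groundUV);

    // reconstruct the ground height stored in the alpha channel
    float groundHeight = lerp(_GroundHeightMin, _GroundHeightMax, ground.a);

    // 0 at the intersection, 1 once the surface is _BlendDistance above the ground
    float blend = saturate((positionWS.y - groundHeight) / _BlendDistance);

    return lerp(ground.rgb, surfaceColor, blend);
}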
Yeah ... there's not really a good fix for this one. The "best" option is to use tessellation to flare the geometry out so it's not a hard 90 degree intersection but instead a smooth slope. However I used the scare quotes on "best" because that only works if you can afford the extra cost of tessellation, and your geometry doesn't have hard edges that split apart when the vertices are moved. It also doesn't fully remove the stripes.

The next option would be to use triplanar mapping, so the sides of the intersecting geometry are textured with a world-space projection rather than their own stretched UVs. But this doesn't fully hide the transition if your texture has details in it that don't work well with triplanar mapping.

One cheap option that'll remove at least some of the striped look is to adjust the texture mip level based on distance from the surface. This has the effect of blurring the texture, which should remove most of the striping. You can even adjust how much you drop the mip level by the surface normal, so you only do it when the geometry is basically straight up and down, and slopes and the like won't get blurred. The easiest way to do it might be something like:

Code (csharp):
float groundBlend = /* 0.0 to 1.0 ground to surface blend */;

// smoothstep to transition to the biased value only when the surface is nearly vertical
// 8.0f is arbitrary, but it's basically how fast it gets blurry
float bias = groundBlend * smoothstep(0.15, 0.05, worldNormal.y) * 8.0f;

// sample the texture; make sure you're using trilinear filtering or this will look terrible
half4 ground = tex2Dbias(_GroundTexture, float4(uv, 0, bias));

// do blend

It won't look amazing, but it may look better than the vertical stripes.
Truly interesting solution! It's not perfect indeed, but still much better than nothing. Now I need to find a solution to the remaining problem.
Ah, yeah, forgot to talk about the second problem. It's actually two separate issues: normal direction and shadows.

When doing the blending, you want to use the terrain's normal at the intersection point and quickly blend to the real surface normal. You want this transition to be a lot quicker than the texture blend; preferably have it blend to the surface normal before the surface texture is visible.

But that's not really the biggest problem. The terrain-textured area isn't receiving any shadows! I'm not entirely sure how you've set this up though, so I'm not sure what solution to give you. I would have assumed you were using a custom shader on the objects that intersect with the terrain, but now I'm thinking you're rendering those objects a second time, maybe using a render feature?
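As a rough sketch of what I mean by two different blend speeds (the numbers are arbitrary, and this assumes you have the terrain's normal available at the intersection point):

Code (csharp):
// heightAboveGround = distance above the ground intersection, same value driving the color blend
float colorBlend  = saturate(heightAboveGround / _ColorBlendDistance);          // slow fade for the texture
float normalBlend = saturate(heightAboveGround / (_ColorBlendDistance * 0.25)); // much faster fade for the normal

half3 albedo   = lerp(groundColor.rgb, surfaceColor.rgb, colorBlend);
half3 normalWS = normalize(lerp(groundNormalWS, surfaceNormalWS, normalBlend));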
Ah, I forgot to explain what I do in the shader. There is a renderer feature for the ground camera and one for the main camera, but that should not cause anything to be rendered twice.

In the shader I use a kind of toon shading - there are two textures, light and dark, and I lerp between them, then I blend the result with the ground. Maybe I am doing this in the wrong order (I am not sure how it should be done). However, I encountered strange camera behaviour just now: when the camera has Render Shadows turned on, it actually does the opposite and I get a no-shadows area like in the picture above. When it is turned off and the camera object is not selected in the scene, then I get this: (screenshot)
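To be more concrete about the shader part, the blend order is roughly this (simplified, names are just illustrative):

Code (csharp):
// toon shading: pick between the dark and light texture first ...
half3 toonColor = lerp(darkTex.rgb, lightTex.rgb, lightRamp);

// ... then blend the result with the ground color from the render texture
half3 finalColor = lerp(groundColor.rgb, toonColor, groundBlend);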
Ah, hmm. I've yet to use the URP for anything serious, or dig into render features. Either the shadow thing is a bug, or it's really badly documented and what that toggle actually does is render unique shadows for that view. For example, you might get a "bright spot" because that sphere isn't visible to that camera, so it doesn't cast shadows when Render Shadows is checked on, vs. that camera reusing the main camera's shadow maps, which do include that sphere, when Render Shadows is checked off. That's certainly not what the documentation suggests, but it's a possible explanation of the behavior you're seeing. Or it's bugged and just backwards for render feature cameras.

The way I might go about this would be to explicitly not render any lighting at all for the terrain camera. Render out the color & smoothness, world normal, and height. Have the shader doing the blend do all the lighting and shadows. Otherwise you'll never get the shadows on the blend to match the shadows on the objects being blended with. With how you're doing it now, that last image is about as good as you'd be able to get.

Really, the way I have gone about this is a bit more involved, and skips the whole second camera aspect entirely. Instead I usually write a custom simplified terrain shader that takes the terrain's splat maps and terrain textures and does all of it on the object's shader directly.
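A very stripped-down sketch of that last approach, assuming a single splat/control map and four ground textures (all names are hypothetical, and one sampler state is reused across the ground textures):

Code (csharp):
// Sample the terrain's control (splat) map and ground textures directly in the
// object's shader, instead of going through a second camera.
TEXTURE2D(_ControlMap);
SAMPLER(sampler_ControlMap);
TEXTURE2D(_Ground0);
TEXTURE2D(_Ground1);
TEXTURE2D(_Ground2);
TEXTURE2D(_Ground3);
SAMPLER(sampler_Ground0); // one sampler state shared by all four ground textures

half3 SampleGroundColor(float2 terrainUV, float2 detailUV)
{
    half4 splat = SAMPLE_TEXTURE2D(_ControlMap, sampler_ControlMap, terrainUV);
    half3 c = SAMPLE_TEXTURE2D(_Ground0, sampler_Ground0, detailUV).rgb * splat.r;
    c += SAMPLE_TEXTURE2D(_Ground1, sampler_Ground0, detailUV).rgb * splat.g;
    c += SAMPLE_TEXTURE2D(_Ground2, sampler_Ground0, detailUV).rgb * splat.b;
    c += SAMPLE_TEXTURE2D(_Ground3, sampler_Ground0, detailUV).rgb * splat.a;
    return c; // feed this into the same height-based blend as before
}

Because the ground color is computed in the object's own shader, it gets the same lighting and shadows as the rest of the object for free.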
This! I thought about this, but I didn't do it because I have read that there is a texture (or sampler) limit per shader (8?) and that sampling many of them is slow. Is that actually true? I don't use Unity terrain; instead a custom shader generates the color from 4 ground textures based on the vertex colors of the terrain mesh (it would be 8 textures with normals). This is also related to my last question - is it better to have 2-3 separate textures for depth, color and normals, or to keep them in a single large texture (currently I have one large render texture with 16 bits per channel)? Anyway, if the number of textures is no big deal, it would be a game changer and I would do it the other way.

About the 'Render Shadows' checkbox on the camera - I will test it a bit and most likely just report it as a bug, because it does not look like a feature.
Yes, ish. The GPU is going to have a sampler limit per fragment shader. How many depends on the API and hardware; it's somewhere between 4 and 32, but all desktop hardware supports at least 16, nothing limited to 4 is supported by Unity anymore, and 8 is going to be super rare. However, Unity's own shader code may use some of those samplers in the background. Surface Shaders (used with the built-in rendering paths) were generally limited to 12 samplers for that reason, sometimes more or less depending on the features in use. I've been able to get 9 working in Shader Graph, so I don't know what the limit is there.

On most platforms you can get around the sampler limit by reusing sampler states on multiple textures. There's still a limit on the max number of textures per fragment, but it's way higher than you're ever likely to hit. The one caveat is that OpenGLES doesn't support separating sampler states from textures. The common trick is to use a Texture2DArray or two to store the terrain textures, à la the free MicroSplat terrain shader asset. That way you can have all of the terrain textures you need without running out of samplers ... assuming you're targeting a device that supports texture arrays.

Sampling many textures in a shader can make it slower. How much is somewhere between not at all slower and way, way slower. It depends on a ton of different factors, so it's not as straightforward as "adding an additional texture sample will make the shader x% slower". However, mobile devices are generally more affected by this. It's kind of in the realm of "you have to try it to find out."

Having a single texture would likely be more efficient, but it's also kind of impossible, since there's no way you can pack a color, normal, and depth into a single texture while still being able to make use of texture filtering. I'd go with two textures: color in an ARGB32, or maybe RGB565 / ARGB1555 if you want to go as cheap as possible, and maybe try an ARGBHalf or ARGB2101010 to store the (normalized 0.0-1.0) depth and normals in. Depending on the device it might even be more efficient to use three textures instead of two if you can pack the data more efficiently. For example, you might be able to get away with an RGB565 color, RG16 normal, and RHalf depth, which is less data than a single ARGBHalf.
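To make the sampler-reuse and texture array points concrete, in HLSL it looks roughly like this (illustrative names; the shared-sampler version won't work on OpenGLES as mentioned):

Code (csharp):
// Several textures, one sampler state (counts as one sampler, four textures).
TEXTURE2D(_Albedo0);
TEXTURE2D(_Albedo1);
TEXTURE2D(_Albedo2);
TEXTURE2D(_Albedo3);
SAMPLER(sampler_Albedo0);

half3 colA = SAMPLE_TEXTURE2D(_Albedo1, sampler_Albedo0, uv).rgb; // texture 1 sampled with sampler 0

// Or pack the ground textures into a Texture2DArray and spend a single sampler total.
TEXTURE2D_ARRAY(_GroundTextures);
SAMPLER(sampler_GroundTextures);

half3 colB = SAMPLE_TEXTURE2D_ARRAY(_GroundTextures, sampler_GroundTextures, uv, 2).rgb; // layer 2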
You are truly a sage of shaders. Thank you very much for the help with this one, and with many other things (indirectly) in other threads. I really appreciate your work!