Modifying depth when parallax mapping

Discussion in 'Shaders' started by jvo3dc, Oct 26, 2015.

  1. jvo3dc (Joined: Oct 11, 2013, Posts: 1,520)
    I'm trying to modify the depth (z-buffer) value in combination with parallax mapping. I know that this breaks early z-rejection, so it can't be used too much. The nice thing is of course that it enables proper intersection of the parallax mapped surfaces with other surfaces and allows shadows to react to the parallax effect.

    So far I've been able to adjust the depth value and also adjust the point light shadow caster.

    [Attached image: parallax_test.png]

    I'm running into two problems though.

    1. The input tangent vector is always normalized. This is not a problem for the parallax mapping itself, but to transform the adjustment in texture coordinates back into world space I need the mapping size of the texture. (Which would usually be the length of the tangent.) It's currently hardcoded in the shader as 5, because that's the scale in the example above. Is there an option to not normalize the input tangents?

    2. I don't seem to be able to override the light casters other than for a point light. The shader is intended to be deferred only, so it has deferred and shadow caster passes. It also has a forward pass, but that's just to get a preview. I've embedded the code from UnityCG.cginc into the shadow pass, but even if I just set pos_clip.z to 0, I see no change. I can't possibly post the whole shader, because it's spread over many includes. But here's the main part of the shadow caster pass.
    Code (csharp):
    #ifdef _ALPHATEST_ON
    FIXED4 result_diffuse = diffuse_map(_Color, _MainTex, uv0);
    clip(result_diffuse.a - _Cutoff);
    #endif
    #ifdef _HEIGHT_AFFECTS_DEPTH
    #ifdef SHADOWS_CUBE
    VALUE3 dist = pos_world.xyz - _LightPositionRange.xyz;
    return UnityEncodeCubeShadowDepth(length(dist) * _LightPositionRange.w);
    #else
    if (unity_LightShadowBias.z != 0.0) {
        //VALUE3 light_world = normalize(UnityWorldSpaceLightDir(pos_world.xyz));
        VALUE shadowCos = dot(normal_world, view_world);
        VALUE shadowSine = sqrt(1.0 - shadowCos * shadowCos);
        VALUE normalBias = unity_LightShadowBias.z * shadowSine;
        pos_world.xyz -= normal_world * normalBias;
    }
    VALUE4 pos_clip = mul(UNITY_MATRIX_VP, float4(pos_world.xyz, 1.0));
    pos_clip = UnityApplyLinearShadowBias(pos_clip);
    #if defined(UNITY_MIGHT_NOT_HAVE_DEPTH_TEXTURE)
    return pos_clip.z / pos_clip.w;
    #else
    out_ps output;
    output.result = 0.0;
    output.depth = pos_clip.z / pos_clip.w;
    return output;
    #endif
    #endif
    #else
    SHADOW_CASTER_FRAGMENT(input)
    #endif
    I know it's not the easiest code to read. Many defines to switch between shader variants. It works for the SHADOWS_CUBE part for point lights, but the rest of the code doesn't really seem to do much. Are the shadow caster shaders for spot and directional lights done differently in deferred?
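
    For reference, writing the adjusted depth in the deferred pass itself comes down to something like this (simplified sketch; pos_world_offset stands in for whatever parallax-adjusted world position your ray march produces):
    Code (csharp):
    struct out_deferred {
        half4 gbuffer0 : SV_Target0;
        // ... remaining G-buffer targets ...
        float depth : SV_Depth; // writing this is what disables early z-rejection
    };

    // In the fragment shader, after the parallax march:
    float4 pos_clip = mul(UNITY_MATRIX_VP, float4(pos_world_offset, 1.0));
    output.depth = pos_clip.z / pos_clip.w; // depth after the perspective divide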
     
  2. jvo3dc (Joined: Oct 11, 2013, Posts: 1,520)
    With respect to point 1 I can see a few solutions. None of them is really great.

    Update: I'll go for calculating the uv scale using screen space derivatives. A bit more code than strictly needed, but it saves having to add extra information to the mesh.
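
    Roughly like this (a sketch, where uv0 and pos_world are the interpolated texture coordinate and world position):
    Code (csharp):
    // Solve for d(pos_world)/du from the screen space derivatives.
    float3 dp_dx = ddx(pos_world.xyz);
    float3 dp_dy = ddy(pos_world.xyz);
    float2 duv_dx = ddx(uv0);
    float2 duv_dy = ddy(uv0);
    float det = duv_dx.x * duv_dy.y - duv_dy.x * duv_dx.y;
    float3 dp_du = (dp_dx * duv_dy.y - dp_dy * duv_dx.y) / det;
    // The world space size of one UV unit, i.e. the mapping size that
    // the normalized tangent no longer provides:
    float uv_scale = length(dp_du);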

    For point 2 I'm still in the dark. My code doesn't seem to be used at all, but if the shadow depth is not calculated in the shadow caster pass, then where is it?
     
    Last edited: Oct 28, 2015
  3. jvo3dc (Joined: Oct 11, 2013, Posts: 1,520)
    I've also been able to solve point 2. The issue was that with hardware shadow mapping, I was outputting both a color and a depth. I changed it to output either a color (for cube maps and if there is no hardware shadow mapping) or a depth value (if there is hardware shadow mapping).
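
    In shader terms the fix is roughly this (simplified, with in_ps and fragment_shadow as placeholder names):
    Code (csharp):
    #if defined(SHADOWS_CUBE) || defined(UNITY_MIGHT_NOT_HAVE_DEPTH_TEXTURE)
    // No hardware shadow mapping: the depth goes out encoded as a color.
    FIXED4 fragment_shadow(in_ps input) : SV_Target
    {
        // ... compute and return the encoded depth ...
    }
    #else
    // Hardware shadow mapping: write only a depth value, no color.
    float fragment_shadow(in_ps input) : SV_Depth
    {
        // ... return pos_clip.z / pos_clip.w ...
    }
    #endif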

    Normally you should be able to output both a color and a depth value even without MRT, but maybe in this case this isn't true.
     
  4. barneypitt (Joined: Mar 29, 2018, Posts: 4)
    Hi. Did you get this to work?

    I would really appreciate it if you could post the shader code for this. I'm trying to get the same thing to work in OpenGL, (non-Unity) and having a working shader to adapt would be a great help.

    I'm less concerned with the shadowing integration (which sounds very Unity-specific); it's getting the depth writing working in the parallax shader that I'm having trouble with.

    Thanks
     
  5. MadeFromPolygons (Joined: Oct 5, 2013, Posts: 3,982)
    This thread was 3 years old when you posted.
     
  6. jvo3dc (Joined: Oct 11, 2013, Posts: 1,520)
    And the shader code is very Unity-specific. Besides that, you'll also need a specific filter for the height maps for my implementation, because I'm using Relaxed Cone Stepping. Writing out the depth is probably the easiest part.
     
    Last edited: Mar 30, 2018
  7. barneypitt (Joined: Mar 29, 2018, Posts: 4)
    Hi, and thanks for replying.

    I know all about [relaxed] cone step mapping. In fact my algorithm is a novel one which also requires a precomputed map; it should be considerably faster than cone step mapping, O(log(n)) as opposed to O(sqrt(n)). It's related to the Max Mipmap method (e.g. http://www.cs.utah.edu/~zetwal/classwork/InteractiveCG/Report/Site_Mip/Maximum_Mipmaps.html) but somewhat smarter (more info in the map means it never needs to backtrack, and it has naturally adaptive accuracy). It's all on paper at the moment, but I'm pretty sure it will be very performant.
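
    For context, the plain max mipmap traversal from that report goes roughly like this (very schematic; cell_exit is a hypothetical helper returning the ray parameter where the ray leaves the current cell, and MAX_LEVEL is the coarsest mip level):
    Code (csharp):
    // Each mip level of height_max_mip stores the maximum height of the
    // texels beneath it, so whole cells can be skipped safely.
    int level = MAX_LEVEL;
    float t = 0.0; // ray parameter; tangent space, z = height
    while (level >= 0) {
        float3 p = ray_origin + t * ray_dir;
        float cell_max = tex2Dlod(height_max_mip, float4(p.xy, 0, level)).r;
        if (p.z > cell_max) {
            // The ray clears everything in this cell: skip to its exit.
            t = cell_exit(p, ray_dir, level);
        } else {
            level--; // possible hit, refine at the next finer level
        }
    }
    // On exit, t is (close to) the first intersection with the height field.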

    But that stuff (the "hard bit"!) isn't troubling me; it's extending the texture beyond its regular bounds which I just don't get. I'm much more au fait with maths than I am with 3D graphics :(

    In any form of displacement mapping, the idea is to aim a line from the camera to a point (x, y, 0) in the texture, find where that line first intercepts the height mapped surface - (x', y') say - and set the output colour at (x, y) to the texture colour at (x', y'). But to extrapolate the parallax-mapped surface beyond the obliquely viewed rectangle's bounds, as you show in your screenshot, one would have to be aiming the line from the camera at a point (x'', y'', 0) which has coordinates x'', y'' outside the texture bounds. I can't see how the shader would ever get called for such a point.
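
    (For the in-bounds case, the line aiming I mean is the usual march; sign conventions depend on your tangent basis, and _HeightMap is just a placeholder name:)
    Code (csharp):
    float2 parallax_uv(float2 uv, float3 view_ts, float height_scale)
    {
        const int STEPS = 32;
        // March from the top of the height volume down towards z = 0,
        // shifting uv along the projected view direction each step.
        float2 duv = view_ts.xy / view_ts.z * (height_scale / STEPS);
        float layer = 1.0;
        for (int i = 0; i < STEPS; i++) {
            if (tex2D(_HeightMap, uv).r >= layer)
                break; // the ray dropped below the surface: (x', y') found
            uv -= duv;
            layer -= 1.0 / STEPS;
        }
        return uv; // near silhouettes this can end up outside [0, 1]
    }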

    Also, I'm not entirely sure what happens, pipeline-wise, when I override the texel's output x, y, z. I'm hoping that looking at your shader code would enlighten me on these points (or if you absolutely can't share the code, that perhaps you could identify what I'm misunderstanding and set me on the right track!).

    Thanks
    Barney
     
  8. barneypitt (Joined: Mar 29, 2018, Posts: 4)
    Erm, that makes my query less valid, how?
     