How to make displacement of worldPos values in vertex shader?

Discussion in 'Shaders' started by Artromskiy, Dec 25, 2020.

  1. Artromskiy

    Joined:
    Oct 28, 2019
    Posts:
    7
    I want to make a shader that has an offset in world coordinates, but only in the fragment stage. I tried doing it with the standard offset in the vertex shader and then changing things in the generated shader, but it doesn't help (artifacts in the lighting direction). Here's some pseudocode of the surface shader:
    Code (HLSL):
    #pragma surface surf Lambert vertex:vert

    struct Input
    {
        float2 uv_MainTex;
    };

    float _Amount;

    void vert(inout appdata_full v)
    {
        // standard displacement along the vertex normal
        v.vertex.xyz += v.normal * _Amount;
    }

    sampler2D _MainTex;

    void surf(Input IN, inout SurfaceOutput o)
    {
        o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
    }
    Then in the generated shader I changed it like this:

    Code (HLSL):
    struct Input
    {
        float2 uv_MainTex;
    };

    float _Amount;

    // changed to return the displaced position instead of modifying the vertex
    float3 vert(inout appdata_full v)
    {
        return v.vertex.xyz + v.normal * _Amount;
    }

    sampler2D _MainTex;

    void surf(Input IN, inout SurfaceOutput o)
    {
        o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
    }
    The result of vert was then applied to the world position output in the vertex function:

    Code (HLSL):
    // Some generated code
    float3 newVert = vert(v);
    o.pos = UnityObjectToClipPos(v.vertex);
    o.pack0.xy = TRANSFORM_TEX(v.texcoord, _MainTex);
    // float4(newVert, 1) is needed here; mul with a bare float3 implicitly
    // truncates the matrix and drops the translation
    float3 worldPos = mul(unity_ObjectToWorld, float4(newVert, 1)).xyz;
    // Some generated code
    Any ideas on how to make this simpler, or just how to make it work?
     
  2. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    What do you mean by “only the fragment part”?
     
  3. Artromskiy

    Joined:
    Oct 28, 2019
    Posts:
    7
    I want to render a cube (for example) at its normal size, but modify worldPos in the fragment shader using a heightmap, so the modified position is used to calculate the light attenuation while the cube itself stays unmodified. In other words, I don't want to change the real vertex positions, only send a modified position to the lighting.
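    Roughly what I mean, as a sketch (not working code; _HeightMap and _Amount are placeholder names, and the v2f is assumed to carry uv, worldPos and worldNormal):

    Code (HLSL):
    sampler2D _MainTex;
    sampler2D _HeightMap; // placeholder height map
    float _Amount;        // placeholder height scale

    fixed4 frag(v2f i) : SV_Target
    {
        // Start from the interpolated world position of the flat face.
        float3 worldPos = i.worldPos;

        // Push it out along the normal by the sampled height, fragment only.
        worldPos += normalize(i.worldNormal) * tex2D(_HeightMap, i.uv).r * _Amount;

        // Use the displaced position for the lighting instead of the real one.
        float3 lightDir = normalize(UnityWorldSpaceLightDir(worldPos));
        fixed ndotl = saturate(dot(normalize(i.worldNormal), lightDir));
        return tex2D(_MainTex, i.uv) * _LightColor0 * ndotl;
    }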
     
  4. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Sounds like what you're looking for is a parallax occlusion mapping (aka POM) or relief mapping shader. They're basically the same technique with subtle differences in some of the finer implementation details, but the short version is you give it a height map and an offset scale and it gives you the appearance of geometric depth on a flat surface. You can try to implement your own version of this, or you can give this asset a go:
    https://assetstore.unity.com/packages/vfx/shaders/uber-standard-shader-ultra-39959

    Amplify Shader Editor also has a built-in node for this if you want to try doing it with a node-based shader editor.

    Note, if you find something called just "Parallax Mapping" or "Parallax Offset Mapping", this isn't really the same thing. Parallax Offset Mapping is what Unity's built-in Standard shader does when you give it a height map. It's a very gross approximation that gives a surface some movement that sort of looks like it has height, but it's also good at making the surface look like it's made out of goopy paint that's mixing badly.
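    If you'd rather roll your own, the core of POM is just marching a ray along the tangent space view direction until it drops below the height map. A minimal sketch of the idea (not a drop-in shader; _HeightMap and _Parallax are placeholder names, and viewDirTS is assumed to be the normalized tangent space view direction):

    Code (HLSL):
    sampler2D _HeightMap; // placeholder, white = high
    float _Parallax;      // placeholder depth scale

    float2 ParallaxOcclusionMapping(float2 uv, float3 viewDirTS)
    {
        const int steps = 16;
        // UV offset per step; a full march covers _Parallax worth of depth.
        float2 uvStep = viewDirTS.xy / viewDirTS.z * _Parallax / steps;
        float heightStep = 1.0 / steps;

        // Start at the top of the height field and walk the ray down into it.
        float rayHeight = 1.0;
        [loop]
        for (int i = 0; i < steps; i++)
        {
            // tex2Dlod so the dynamic loop doesn't need derivatives
            float surfaceHeight = tex2Dlod(_HeightMap, float4(uv, 0, 0)).r;
            if (rayHeight <= surfaceHeight)
                break;
            uv -= uvStep;
            rayHeight -= heightStep;
        }
        return uv;
    }

    Relief mapping mostly differs in refining that last step with a binary search.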
     
    Last edited: Dec 26, 2020
  5. Artromskiy

    Joined:
    Oct 28, 2019
    Posts:
    7
    Looks like POM is what I want, thank you very much. Is there a built-in function in the cginc files to calculate the lighting in the fragment shader from a given worldPos? And do I need to use a vertex/fragment shader, or can this be done with a simple surface shader?
     
  6. Artromskiy

    Joined:
    Oct 28, 2019
    Posts:
    7
    Basically, I want to make a shader for pixel art.
    Actually changing the geometry is a bad idea, since it would mean a very large number of polygons for each 32x32 tile, and there will be many tiles. I'd also like to calculate the lighting attenuation once per texel of the texture (32x32), not per interpolated pixel on screen. As a result there are a lot of problems, and I'm already starting to wonder whether this could be done with a particle system or Unity VFX. Any advice?
    Update: found your reply about lighting https://forum.unity.com/threads/the-quest-for-efficient-per-texel-lighting.529948/
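    The core idea there, if I read it right, is to snap the UV to the texel center and then shift the interpolated world position to match using screen space derivatives, so all the lighting inputs are constant across a texel. A rough sketch of my reading of it (not code from the thread; _MainTex_TexelSize is Unity's standard (1/w, 1/h, w, h) vector):

    Code (HLSL):
    float4 _MainTex_TexelSize; // (1/w, 1/h, w, h), set automatically by Unity

    float3 PerTexelWorldPos(float2 uv, float3 worldPos, out float2 snappedUV)
    {
        // Snap the UV to the center of the texel it falls in.
        snappedUV = (floor(uv * _MainTex_TexelSize.zw) + 0.5) * _MainTex_TexelSize.xy;
        float2 duv = snappedUV - uv;

        // Screen space derivatives relate small UV deltas to world deltas.
        float2 dxUV = ddx(uv);       float2 dyUV = ddy(uv);
        float3 dxWP = ddx(worldPos); float3 dyWP = ddy(worldPos);

        // Solve the 2x2 system [dxUV dyUV] * t = duv for t.
        float det = dxUV.x * dyUV.y - dyUV.x * dxUV.y;
        float2 t = float2(duv.x * dyUV.y - duv.y * dyUV.x,
                          duv.y * dxUV.x - duv.x * dxUV.y) / det;

        // Move the world position by the matching world space amount.
        return worldPos + dxWP * t.x + dyWP * t.y;
    }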
     
    Last edited: Dec 27, 2020
  7. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Yeah, this is outside the realm of Surface Shaders. You can do basic offset mapping or even POM in Surface Shaders, but the real geometry position for lighting and depth will always just be the geometry surface as you don’t have access to the data to modify that.

    If you’re looking to do something like “3D pixel” style art, you’re talking about voxels. You probably want to look at voxel or cube stepped ray marching. It makes something like POM way cheaper as you have a known, fixed step size for the ray marching, and don’t have to deal with the additional nebulousness of bilinear or trilinear filtering.