Fake crystal reflections as seen in Sacred 2

Discussion in 'Shaders' started by Goldensubject, Mar 3, 2018.

  1. Goldensubject

     Joined: Jun 28, 2015
     Posts: 8
    Hi everyone.

    I'm trying to reproduce a realtime reflection on a huge amount of crystals for a game I'm working on, close to the effect seen here:
    https://simonschreibt.de/gat/sacred-2-crystal-reflexion/

    Regarding the approach seen in the reference, two questions come to mind:

    1- How would you grab the last frame rendered by the camera to send it to the shader?

    2- I suppose what happens next would consist in playing with the UVs of the grabbed texture and the normals of the crystals, but what could the math look like?

    If you have another approach to do something similar I'm all ears too, knowing that:
    1- The crystals will be opaque.
    2- Classic planar reflection with secondary cameras rendering at different angles should not be considered, as the faces point in so many different directions.
    3- Realtime reflection probes are out too, because they would mean constantly updating a lot of probes in the scene.
    4- I really mean to reflect transparent passes and FX, so no SSR.

    Thanks!
     
  2. AcidArrow

     Joined: May 20, 2010
     Posts: 10,982
    Rendertextures?
    I think this is the kind of thing where a lot of magic numbers come into play, since it's fakery at its best. Some sort of offset on the UVs based on the normals.
     
  3. bgolus

     Joined: Dec 7, 2012
     Posts: 12,229
    A similar effect was done for Super Mario Galaxy too.

    It's a very old but effective trick. It's often referred to as a distortion or refraction shader. Most examples in Unity use a grab pass rather than the last frame, but the concept is the same.

    This old (and possibly no longer functional) shader for Unity shows the basics of the implementation.
    http://wiki.unity3d.com/index.php?title=Refraction
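
    For reference, the core of that grab pass approach boils down to something like this (a rough sketch with my own naming, not the wiki code itself; _GrabTex and _Distortion are placeholder names):

    Code (CSharp):

    Shader "Sketch/GrabDistort"
    {
        Properties
        {
            _Distortion ("Distortion", Range(0, 0.2)) = 0.05
        }
        SubShader
        {
            // draw after opaque geometry so it ends up in the grab texture
            Tags { "Queue" = "Transparent" }

            // copies the screen into _GrabTex just before this object draws;
            // a named grab pass is grabbed once per frame and shared by every object using it
            GrabPass { "_GrabTex" }

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _GrabTex;
                float _Distortion;

                struct v2f
                {
                    float4 pos : SV_POSITION;
                    float4 grabPos : TEXCOORD0;
                    float3 viewNormal : TEXCOORD1;
                };

                v2f vert (appdata_base v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.grabPos = ComputeGrabScreenPos(o.pos);
                    o.viewNormal = mul((float3x3)UNITY_MATRIX_V, UnityObjectToWorldNormal(v.normal));
                    return o;
                }

                half4 frag (v2f i) : SV_Target
                {
                    // offset the grabbed screen UV along the view-space normal
                    float2 offset = i.viewNormal.xy * _Distortion;
                    return tex2Dproj(_GrabTex, i.grabPos + float4(offset, 0, 0) * i.grabPos.w);
                }
                ENDCG
            }
        }
    }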

    The way you'd grab the last frame would be to add a post process effect whose only purpose is to copy the rendered image into another render texture. I would suggest doing this before most post process effects, as you don't want it to be color corrected or bloomed "twice" (once in the previous frame, and again in the current frame). That'll lead to the crystals blowing out as each frame compounds on the last.
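
    A minimal sketch of that copy step could look like this (assuming the built-in render pipeline; _LastFrameTex is a placeholder name your crystal shader would sample):

    Code (CSharp):

    using UnityEngine;

    // Copies each rendered frame into a render texture so shaders can
    // sample the previous frame through the _LastFrameTex global.
    // Attach to the camera, ordered before color grading / bloom.
    [RequireComponent(typeof(Camera))]
    public class LastFrameCapture : MonoBehaviour
    {
        RenderTexture lastFrame;

        void OnRenderImage(RenderTexture src, RenderTexture dst)
        {
            if (lastFrame == null || lastFrame.width != src.width || lastFrame.height != src.height)
            {
                if (lastFrame != null) lastFrame.Release();
                lastFrame = new RenderTexture(src.width, src.height, 0, src.format);
                Shader.SetGlobalTexture("_LastFrameTex", lastFrame);
            }
            Graphics.Blit(src, lastFrame); // keep a copy for next frame's crystals
            Graphics.Blit(src, dst);       // pass the image through unchanged
        }

        void OnDisable()
        {
            if (lastFrame != null) lastFrame.Release();
        }
    }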
     
  4. Goldensubject

     Joined: Jun 28, 2015
     Posts: 8
    Thank you for your response, this should work indeed.
    I'll try this and come back here if something doesn't go as intended.
     
  5. MadeFromPolygons

     Joined: Oct 5, 2013
     Posts: 3,874
    This video will show how:

    [embedded video]

    The Spelunky ice crystals do a similar thing, just for 2D.
     
  6. Goldensubject

     Joined: Jun 28, 2015
     Posts: 8
    Thank you for the Spelunky reference, it helped me get the RT right.

    For now I'm struggling to get those magic numbers @AcidArrow was talking about.

    Here's what I'm trying to achieve:
    [attached image: upload_2018-3-9_15-3-11.png]

    So I set the rendered image as a global texture so I can pick it up in the cube's shader, and now I would like to offset its UVs using the direction the normal is pointing in and a factor determining the length of the vector. I think I've got the logic, but I can't figure out the right math to make it work.

    I don't know how to determine which pixel in screen space has to be offset using just the normal and the factor, especially given that the camera is supposed to rotate and move, etc.

    Thank you.
     
    MadeFromPolygons likes this.
  7. MadeFromPolygons

     Joined: Oct 5, 2013
     Posts: 3,874
    I think the knowledge you need for that is in this tutorial: http://blog.theknightsofunity.com/make-it-snow-fast-screen-space-snow-shader/

    It does a screen space snow effect, but applies it to each surface based on world space or object space normals.
     
  8. Goldensubject

     Joined: Jun 28, 2015
     Posts: 8
    Not really, I'm not trying to unwrap UVs using world space (because that's what it does), but rather trying to get which part of the screen the normal is pointing at.

    It would actually look more like a refraction.
     
    Last edited: Mar 9, 2018
  9. MadeFromPolygons

     Joined: Oct 5, 2013
     Posts: 3,874
    Right, but the bit you just said, working out the normal, is what that tutorial will help you do. No, it won't do exactly this 100%, but it will give you enough know-how to determine the normal you want, and from there you will be able to do what you need.

    There are very few shader resources, so you will need to learn to retrofit effects from existing shaders into new ones like this. The techniques in that tutorial can be used for anything, not just snow, even for non-texture-related stuff. Essentially it lets you decode the normal from depth info while still using screen space, which sounds like what you need.

    EDIT: added the relevant part for you here:

    Now let’s start with getting the normal:

    Code (CSharp):

    half4 frag (v2f i) : SV_Target
    {
        half3 normal;
        float depth;
        // unpack the camera's packed depth + normals texture
        DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.uv), depth, normal);
        // convert the view-space normal to world space
        normal = mul((float3x3)_CamToWorld, normal);
        return half4(normal, 1);
    }
    Unity documentation says that depth and normals are packed into 16 bits each. In order to unpack them, we need to call DecodeDepthNormal as seen above.

    Normals retrieved this way are camera-space normals. That means that if we rotate the camera, the normals' facing will change too. We don't want that, which is why we have to multiply by the _CamToWorld matrix set in the script beforehand. It converts normals from camera to world coordinates so they no longer depend on the camera's perspective.
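
    For reference, the script side that pairs with this would look roughly like the following (a sketch; it assumes the effect material using the shader above is assigned in the inspector):

    Code (CSharp):

    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class ScreenSpaceNormalsEffect : MonoBehaviour
    {
        public Material material; // material using the frag shown above

        void OnEnable()
        {
            // ask Unity to render the packed depth + normals texture
            GetComponent<Camera>().depthTextureMode |= DepthTextureMode.DepthNormals;
        }

        void OnRenderImage(RenderTexture src, RenderTexture dst)
        {
            // camera-to-world matrix used to unpack normals into world space
            material.SetMatrix("_CamToWorld", GetComponent<Camera>().cameraToWorldMatrix);
            Graphics.Blit(src, dst, material);
        }
    }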

    EDIT: doing it this way, you won't be doing reflections on anything you can't see properly (so a bit of a boost performance-wise); only visible depth + normals can have the effect applied.

    Otherwise I don't know how else to help, sorry.
     
  10. Goldensubject

      Joined: Jun 28, 2015
      Posts: 8
    Oh, I get it now, thank you.
    So basically the trick would consist in converting the normal from world space to screen space (and then something like adding the X and Z multiplied by a factor to the screen UVs, maybe?).
    EDIT: Sorry if I asked for too much of your time.
     
    Last edited: Mar 9, 2018
  11. MadeFromPolygons

      Joined: Oct 5, 2013
      Posts: 3,874
    Yes, something exactly like that will set you on the right path, and after that you can trial-and-error until it looks correct (which is 90% of writing shaders, I'm afraid XD ) :)

    No, never apologise for asking for help! I am always happy to help if I can :)

    Seems you are on the right track now :) Update us here on your progress, or if you get stuck!
     
  12. bgolus

      Joined: Dec 7, 2012
      Posts: 12,229
    All of the depth and normal texture decoding stuff is only useful if you need to use the depth and normals texture, which you really only need for a full screen image effect. This should not be done as a full screen image effect; it should be done by rendering the objects directly, at which point you have access to the normals (and depth, if needed) straight from the shader itself, and no "decode" step needs to happen. Plus, generating that texture requires an additional pass over the entire scene if you're not using deferred rendering.

    Unfortunately, I think a lot of what @Daemonhahn has suggested is leading down the wrong path and complicating the effect unnecessarily. The general idea in the video posted is good, but its implementation and code are almost entirely useless for 3D (and shouldn't even be used for 2D).

    What you want is to convert the world space normal into a view or screen space normal.

    Code (CSharp):

    // likely already in your shader some place
    float3 worldNormal = UnityObjectToWorldNormal(v.normal);
    // convert the world normal to a view normal by applying the view matrix to the world normal vector
    o.viewNormal = mul((float3x3)UNITY_MATRIX_V, worldNormal);


    Use that as the normal you output from your vertex shader instead of the world normal. That'll make the x and y values of the normal align with the horizontal and vertical axes of the camera view. If you want to use normal maps, you'll need to do that last line in the fragment shader instead, after sampling the tangent space normal map and applying the tangent-to-world matrix to get the world normal.

    The use of the _Object2World matrix (renamed unity_ObjectToWorld in Unity 5) in the video above is a bit of a hack that can work for 2D games, but using a proper tangent-to-world matrix would have allowed him to retain batching, and it also works in 3D. It only works for 2D because the world and view axes happen to be aligned.
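
    Putting that together with the "copy the last frame" step from earlier, a bare-bones vertex/fragment pair could look like this (a sketch; _LastFrameTex and _Distortion are placeholder names, and the distortion strength is one of those magic numbers to tune by eye):

    Code (CSharp):

    sampler2D _LastFrameTex; // set globally by the frame-copy script
    float _Distortion;       // magic number controlling the offset length

    struct v2f
    {
        float4 pos : SV_POSITION;
        float4 screenPos : TEXCOORD0;
        float3 viewNormal : TEXCOORD1;
    };

    v2f vert (appdata_base v)
    {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        o.screenPos = ComputeScreenPos(o.pos);
        float3 worldNormal = UnityObjectToWorldNormal(v.normal);
        o.viewNormal = mul((float3x3)UNITY_MATRIX_V, worldNormal);
        return o;
    }

    half4 frag (v2f i) : SV_Target
    {
        float2 screenUV = i.screenPos.xy / i.screenPos.w;
        // nudge the screen UV along the view-space normal's x/y
        float2 uv = screenUV + i.viewNormal.xy * _Distortion;
        return tex2D(_LastFrameTex, uv);
    }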
     
    MadeFromPolygons likes this.