
Need to write Normals, Depth and object tags to disk

Discussion in 'Shaders' started by unitarian411, Mar 12, 2015.

  1. unitarian411


    Joined:
    May 30, 2012
    Posts:
    13
Working on a simulation where I would like to output per frame (the sim doesn't need to run in realtime):

    1- a depth map for the given camera
    2- a normal map for the given camera
    3- some kind of index based on tags to allow identification of object types in an output bitmap

How would I do this?
a) Using a script attached to the camera that implements OnRenderImage,
b) in that method, run three shaders (one for each of the three desired outputs),
c) each shader writes its data as the output color, which is then picked up
d) by OnRenderImage, which can extract PNG data from the RenderTexture and write a .png file to disk?

I think I understand how to do a, b, and d.
But not c - I have read online that there is a G-buffer which holds depth, normals etc., and that there might be globals accessible to the fragment shader called _CameraGBufferTexture0 ... _CameraGBufferTexture3,
but there is really no documentation, and in Unity 5 (the only one I tried) the _CameraGBufferTexture globals don't seem to exist for the shaders.

If this is indeed the correct pseudocode, could someone provide links or hints regarding (c)?

If it's wrong, or there is a simpler solution, I would appreciate help as well.

    Will post results once I get it working.


    PS:
For 3, I could imagine creating certain tags (maybe named "T_NNN", where NNN is the integer index I would like to see appear at the pixels where that tagged object rendered, since image pixels have to hold integer values). Can a shader access object tags?
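On the tag question: shaders cannot read GameObject tags directly; a common workaround is to pass a per-object ID into the shader from script. A minimal sketch, assuming a hypothetical shader property name "_ObjectID" (MaterialPropertyBlock avoids duplicating materials per object):

    using UnityEngine;

    public class ObjectIdTagger : MonoBehaviour
    {
        public int objectId = 1; // e.g. parsed from a "T_NNN" tag at startup

        void Start ()
        {
            var rend = GetComponent<Renderer> ();
            var block = new MaterialPropertyBlock ();
            rend.GetPropertyBlock (block);
            // Encode the integer ID into a 0..1 value the shader can output directly.
            block.SetFloat ("_ObjectID", objectId / 255.0f);
            rend.SetPropertyBlock (block);
        }
    }

A matching fragment shader would then simply return float4(_ObjectID, 0, 0, 1), giving each tagged object a distinct gray value in the output bitmap.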
     
  2. aubergine


    Joined:
    Sep 12, 2009
    Posts:
    2,864
You do that with replacement shaders. Read the documentation for those and check the examples.
If you need a better example, check one of my Glow Per Object, Desaturate Per Object or Pixelate Per Object packages on the Asset Store.

However, there are some inconsistencies between the RenderType tags of the Unity 5 Standard, new unlit and old legacy shaders, so you should pay attention to those as well.

EDIT: You don't have to create extra tags; the RenderType tag is already there in all standard Unity shaders.
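The replacement-shader approach mentioned above can be sketched roughly like this (not code from this thread; "ReplacementOutput" is a made-up shader name):

    using UnityEngine;

    [RequireComponent (typeof(Camera))]
    public class ReplacementCapture : MonoBehaviour
    {
        public Shader replacementShader; // e.g. a "Hidden/ReplacementOutput" shader
        public RenderTexture target;

        public void Capture ()
        {
            var cam = GetComponent<Camera> ();
            cam.targetTexture = target;
            // Renders the scene once; each object's shader is replaced by the
            // subshader in replacementShader whose RenderType tag matches it.
            cam.RenderWithShader (replacementShader, "RenderType");
            cam.targetTexture = null;
        }
    }

Because the replacement is keyed on RenderType, you get different output per shader category for free, which is what makes the built-in tags sufficient.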
     
  3. unitarian411


    Joined:
    May 30, 2012
    Posts:
    13
This is work in progress, but here is what I have so far:

    Steps:

1. Create a new RenderTexture.

2. Create a new Camera.
2.1. Set the camera's Target Texture to the RenderTexture created in 1.

3. Create emitGBufferBits.cs (just below).
I have a quick hack to write out the normal map, depth map and RGB image for just the first frame. Obviously this could be extended to record every frame, every n-th frame, etc.
3.1. Add this script to the camera created in 2.
using UnityEngine;
using System.Collections;
using System.IO;

// PostEffectsBase comes from Unity's standard Image Effects package.
[RequireComponent (typeof(Camera))]
public class emitGBufferBits : PostEffectsBase
{
    public Shader screenDepthShader;
    public Shader normalsShader;

    Material normalsMaterial;
    Material screenDepthMaterial;
    int n = 0;

    new void Start ()
    {
        // Ask Unity to generate the depth+normals texture for this camera.
        GetComponent<Camera>().depthTextureMode = DepthTextureMode.DepthNormals;
        CheckResources ();
    }

    public override bool CheckResources ()
    {
        CheckSupport (false);

        normalsMaterial = CheckShaderAndCreateMaterial (normalsShader, normalsMaterial);
        screenDepthMaterial = CheckShaderAndCreateMaterial (screenDepthShader, screenDepthMaterial);

        if (!isSupported)
            ReportAutoDisable ();
        return isSupported;
    }

    void OnRenderImage (RenderTexture source, RenderTexture destination)
    {
        if (CheckResources ()) {
            Graphics.Blit (source, destination, normalsMaterial);
            saveFrame (destination); // write normals

            // Could we not just use source.depthBuffer directly here,
            // without the extra shader & blit? How?
            Graphics.Blit (source, destination, screenDepthMaterial);
            saveFrame (destination); // write depth buffer
        }

        Graphics.Blit (source, destination);
        saveFrame (destination); // write RGB image
    }

    void saveFrame (RenderTexture renderTexture)
    {
        if (renderTexture != null && n < 3)
        {
            // A RenderTexture lives in GPU memory; copy it to main memory via a Texture2D.
            RenderTexture.active = renderTexture;
            Texture2D texture = new Texture2D (renderTexture.width, renderTexture.height, TextureFormat.ARGB32, false);
            // ReadPixels reads from the currently active RenderTexture.
            texture.ReadPixels (new Rect (0, 0, renderTexture.width, renderTexture.height), 0, 0);
            texture.Apply ();

            byte[] bytes = texture.EncodeToPNG ();
            string filePath = Application.dataPath + "/../SavedScreen_" + n + ".png";
            n++;
            Debug.Log ("sensor snapshot saved to " + filePath);
            File.WriteAllBytes (filePath, bytes);
        }
    }
}



4. Create a new shader "ScreenDepth" (just below).
4.1. Set the emitGBufferBits script parameter "Screen Depth Shader" to this shader.
Shader "Custom/ScreenDepth"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _CameraDepthTexture;

            struct v2f {
                float4 pos : SV_POSITION;
                float4 scrPos : TEXCOORD1;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
                o.scrPos = ComputeScreenPos (o.pos);
                #if defined (UNITY_UV_STARTS_AT_TOP)
                // For DirectX-like systems, flip vertically.
                o.scrPos.y = 1 - o.scrPos.y;
                #else
                // For OpenGL-like systems, do nothing.
                #endif
                return o;
            }

            half4 frag (v2f i) : COLOR
            {
                float depthValue = Linear01Depth (tex2Dproj (_CameraDepthTexture, UNITY_PROJ_COORD (i.scrPos)).r);
                return half4 (depthValue, depthValue, depthValue, 1);
            }
            ENDCG
        }
    }
}


5. Create a new shader "Normals" (just below).
5.1. Set the emitGBufferBits script parameter "Normals Shader" to this shader.
Shader "Custom/Normals"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _CameraDepthNormalsTexture;

            struct v2f {
                float4 pos : SV_POSITION;
                float4 scrPos : TEXCOORD1;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
                o.scrPos = ComputeScreenPos (o.pos);
                #if defined (UNITY_UV_STARTS_AT_TOP)
                // For DirectX-like systems, flip vertically.
                o.scrPos.y = 1 - o.scrPos.y;
                #else
                // For OpenGL-like systems, do nothing.
                #endif
                return o;
            }

            half4 frag (v2f i) : COLOR
            {
                float3 normalValues = DecodeViewNormalStereo (tex2D (_CameraDepthNormalsTexture, i.scrPos.xy));
                return float4 (normalValues, 1);
            }
            ENDCG
        }
    }
}



6. Point the camera created in 2 at an interesting scene with objects, and adjust its Near and Far clipping planes so they reasonably bound the objects (depth between Near and Far is mapped to 0..1, so make the most of it).
Press Play and you should see console logs about three images being written to disk.

Here is an overview of the scene, with a red "iPad" whose screen shows the RenderTexture created in 1 and the camera created in 2 (on the back of the iPad, looking out):
[screenshot attachment: Screen Shot 2015-03-12 at 12.14.57 AM.png]

This is the normals image emitted (SavedScreen_0.png).

Here is the depth image (SavedScreen_1.png).

And finally the RGB image (SavedScreen_2.png).


Some observations:
• The normals are in view space, not world space. On second thought, that is actually what I want.
• The RGB image is strangely underexposed compared to what I see in the editor screenshot.
• I wish there were a way to write out the depth and normal components at higher precision than 0..255 (like a float!).
Anyway, I promised to give an update once I had something working. Please feel free to suggest architectural, speed, syntax etc. improvements; I am quite clueless when it comes to shaders.
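On the precision wish, one possible approach (not tested in this thread, and note that Texture2D.EncodeToEXR only exists in Unity versions newer than the Unity 5 used here) is to render into a float RenderTexture and save EXR instead of PNG:

    using UnityEngine;
    using System.IO;

    public static class FloatFrameSaver
    {
        // rt should be created with RenderTextureFormat.ARGBFloat so no
        // precision is lost on the GPU side before readback.
        public static void Save (RenderTexture rt, string filePath)
        {
            RenderTexture.active = rt;
            var tex = new Texture2D (rt.width, rt.height, TextureFormat.RGBAFloat, false);
            tex.ReadPixels (new Rect (0, 0, rt.width, rt.height), 0, 0);
            tex.Apply ();
            // EXR stores full float channels, unlike 8-bit-per-channel PNG.
            File.WriteAllBytes (filePath, tex.EncodeToEXR (Texture2D.EXRFlags.OutputAsFloat));
        }
    }
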

Thanks to Willy Chyr's three-part article "Unity Shaders - Depth and Normal Textures", from which I picked up most of this.
     
  4. aubergine


    Joined:
    Sep 12, 2009
    Posts:
    2,864
Well, here is a clue: read my first comment again, about replacement shaders.
     
  5. unitarian411


    Joined:
    May 30, 2012
    Posts:
    13
Hey Aubergine, I got the notification email about your post after I posted my update. Thanks for your reply - I will check into replacement shaders!