Question Ray Marcher, which Renderpipeline is best?

Discussion in 'General Graphics' started by UlfvonEschlauer, Aug 5, 2021.

  1. UlfvonEschlauer

    UlfvonEschlauer

    Joined:
    Dec 3, 2014
    Posts:
    127
    Hello!
    So I am writing a ray marcher for rendering of volume data (e.g. DICOM files). Essentially it uses an inside out box to render front to back (for performance reasons).
The nature of the beast is that I need to have semi-transparent voxels.
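For context, the front to back part works by accumulating color and opacity along the ray and stopping as soon as the accumulated opacity is nearly 1; that early out is where the performance win comes from. A minimal sketch (sampleVolume() and the loop variables here are placeholders, not my actual code):
Code (CSharp):
// front to back "over" compositing with early ray termination
// (minimal sketch; sampleVolume() and the constants are placeholders)
float4 accum = float4(0, 0, 0, 0); // rgb = premultiplied color, a = opacity
for (uint i = 0; i < NUM_STEPS; i++)
{
    float4 src = sampleVolume(pos); // rgb = voxel color, a = voxel opacity
    accum.rgb += (1.0 - accum.a) * src.a * src.rgb;
    accum.a   += (1.0 - accum.a) * src.a;
    if (accum.a > 0.99) break; // early out once the ray is nearly opaque
    pos += rayDir * stepSize;
}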
    I originally prototyped the shaders with GLSL in LightWave (where my current application is based) and they work decently and are relatively fast. Now inside LightWave I have no shadow maps in OpenGL and lighting is pretty basic, so I had to raytrace all of the shadows and Ambient Occlusion (see image below):
    GPURaycastingAreaAO06.JPG
    Now LightWave is not a game engine anyway and doing fancier stuff is not really possible.
    I was also hoping that I could get improved performance when moving to Unity and using some of the screen based effects and multi pass shaders with shadowmap support there. I initially did a pretty much straight port of the above shader to Unity, but performance was a lot lower than inside LightWave.
This is what I managed to get in Unity. Performance of this shader is not as good as the one above, despite having only hard shadows and a much simpler AO approximation (still raytraced). Ignore the FPS count, btw; Unity is totally off the mark there for some reason (wish it was that fast...).
    CaptureUnityShader.PNG

    At first this did not discourage me because I was hoping that shadowmaps and SSAO would make up for that performance loss.
    I started out in the BIRP, but have run into limitations. E.g. the BIRP does not allow for transparent shaders to receive shadowmaps (really? come on!). Of course I did not figure that out until I spent 2 days looking for a mistake that was not on my end :(
    I tried a hack using Alphatest and this is the result. Ugh! Certainly not as pretty as the others...
    CaptureUnityShader02.PNG

...and Unity really does not like that idea at all. E.g. it gets all screwy with post processing volumes and directional lights. Then it looks like this, and that does not go away until I restart Unity (even after I remove the post processing volumes from the scene)...
    CaptureUnityShader02b.PNG

I found a thread on the topic that suggested that URP and HDRP might fix the issue of shadows cast onto transparent geometry. I tried running it in the different pipelines, but it seems like some of the built in functions are not working properly in them (some of them do not even compile, even though there is no error message, ugh). So they will need a lot of extra work before they work correctly.

So before I go through all the trouble of rewriting my shaders to adapt them to URP or HDRP (and probably spending several more days banging my head over things), does anyone here have any idea which of the two pipelines would be more suitable for my needs? HDRP might be a bit slower, but it might offer features better suited to this that make up for the difference.
    Hope I was able to explain the situation correctly.
    Thanks a lot in advance!
    Ulf
     
    Last edited: Aug 5, 2021
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
    So ... I'm going to focus on this one point:
    You'll need to stick to raytracing then, because Unity doesn't support transparent shadows, apart from approximating it with dithered alpha testing in the shadow map. And absolutely doesn't support AO on transparencies. No real time game engine that I know of does AO on transparencies, again apart from using dithered opacity to fake transparency.

    For this reason, real time semi-transparent volume rendering is still something game engines generally shy away from outside of simplified cloud / fog rendering systems where rendering at a very low resolution and blurring the heck out of it won't be obvious. For example the HDRP has support for fog density volumes that react to lighting and shadows, but they're limited to a single channel 32x32x32 resolution volume, likely because anything higher resolution is considered too slow to be useful.


Now, if you're okay with some minor hackery, the BIRP is actually capable of having semi-transparent objects receive shadows. At least for the main directional light. If you need shadows from additional lights then you will indeed be limited to the URP & HDRP. URP isn't at all documented, and any article you'll find on how to write custom shaders for it is 99% likely to be outdated and no longer compile. The only option there is to go through the shader code yourself to try and figure out how to deal with it. HDRP is in the same situation ... only that code is insanely complicated.

    But it might not really matter if you really want SSAO support, because again, transparencies won't work with that at all.

    So the "best" answer for performance might be to look into doing stochastic transparency using an animated blue noise, and make use of TAA built in to the Post Processing Stack. I'd also recommend focusing on making your shader use the BIRP deferred rendering path, which only supports opaque surfaces, but simplifies a bunch of stuff.

    Also I have an explanation for this:
You're seeing shadow acne. You're probably rendering the depth value into the shadow map, but not taking into account the shadow map biasing. Shadow maps have inherent aliasing / precision issues, and Unity's (and most engines') solution is to offset the depth value rendered into the shadow map slightly away from the light and the surface.

I show a simple example of how to implement it for a raytraced sphere in my article here (go to the Shadow Bias section if the link doesn't take you directly to it):
    https://bgolus.medium.com/rendering-a-sphere-on-a-quad-13c92025570c#1820
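For a regular mesh based shadow caster, UnityCG.cginc has helpers that apply the bias for you; a minimal sketch of a biased shadow caster vertex shader (the raymarched case in the article applies the bias to the depth computed in the fragment shader instead):
Code (CSharp):
// applying Unity's shadow bias in a shadow caster vertex shader
// (minimal sketch using the UnityCG.cginc helpers)
#include "UnityCG.cginc"

float4 vertShadowCaster (float4 vertex : POSITION, float3 normal : NORMAL) : SV_POSITION
{
    // offsets the clip space position along the normal and away from the
    // light, using the bias values Unity sets for the current light
    float4 clipPos = UnityClipSpaceShadowCasterPos(vertex, normal);
    return UnityApplyLinearShadowBias(clipPos);
}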
     
    Last edited: Aug 6, 2021
    april_4_short likes this.
  3. april_4_short

    april_4_short

    Joined:
    Jul 19, 2021
    Posts:
    489
I dearly hope we live in a world that rewards you, @bgolus, in some way commensurate with the help you provide.

    Cheers!
     
    Torbach78 likes this.
  4. UlfvonEschlauer

    UlfvonEschlauer

    Joined:
    Dec 3, 2014
    Posts:
    127
@bgolus, thanks so much for the reply. I did try to somehow fiddle your tutorial into what I am doing, but it is just too different. E.g. I am using a front-face culled box instead of a plane (so the camera can go inside the volume). I then offset the volume by 0.5 on each axis to get it to the front of the backfaces (and thus into the right position).

    That said, I don't know exactly why, but moving the tags into the individual passes (and using a geometry queue for the shadow pass) allowed me to get spot light shadowmaps with a fully transparent volume rather than just AlphaTest.
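For anyone following along, the per-pass tag structure I mean looks something like this in ShaderLab (a structural sketch, not my actual shader; see bgolus's next reply for what the Queue tag actually does here):
Code (CSharp):
SubShader
{
    Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }

    Pass // main raymarch pass, alpha blended
    {
        Tags { "LightMode" = "ForwardBase" }
        Blend SrcAlpha OneMinusSrcAlpha
        Cull Front // draw back faces so the camera can go inside the box
        // CGPROGRAM ... ENDCG
    }

    Pass // shadow caster pass
    {
        Tags { "LightMode" = "ShadowCaster" "Queue" = "Geometry" }
        // CGPROGRAM ... ENDCG
    }
}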

Another reason why it looked so bad was actually a flaw in my own code: some samples were out of bounds (which threw the shadowmaps off, but not the raytracer, for some reason).
Note the sphere next to it is actually from your tutorial code... I am using that for comparisons and testing shadows cast onto it. The box is mostly there to test shadows cast by the volume and to ensure that my z-buffer is correct (as can be seen by the intersection).
    CaptureUnityShaderSpotlightShadowmaps.PNG

    I also sort of figured out why my Directional lights are not working right. There is a problem with the orthographic projection. I suppose it is a problem in the vertex shader.
Even with my simple version (the one I used with the raytraced shadows), built on Unity's functions, the actual rendering is off as well, as you can see here:
    CaptureUnityShaderOrthoProblem.PNG

My simple vertex shader code (used with the raytraced shadows) looks like this:

Code (CSharp):
frag_in vert_main (vert_in v)
{
    frag_in o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    o.vertexLocal = ObjSpaceViewDir(float4(v.vertex)).xyz;

    // might use the normal at a later point
    o.normal = UnityObjectToWorldNormal(v.normal);

    // add 0.5 per axis to move the volume into the center of the box
    o.posWorld = mul(unity_WorldToObject, float4(_WorldSpaceCameraPos, 1.0)) + float4(0.5f, 0.5f, 0.5f, 1.0f);

    return o;
}
I tried a derivative of your code as well, but I have been fiddling with it for so long (trying anything that I could think of) that I fear it is totally hosed now (so I am not showing it, since that is sort of pointless).
    I even tried to make a separate vertex shader for the shadow pass, which I am comfortable with having, if it is necessary.
    I don't quite get what I am missing, even with the simple vertex shader above.
    If you have any idea for a vertex shader that I can use, I would be very grateful to you!
    Thanks!
    Ulf
     

    Attached Files:

    Last edited: Aug 6, 2021
    april_4_short likes this.
  5. april_4_short

    april_4_short

    Joined:
    Jul 19, 2021
    Posts:
    489
It's very comforting to know I'm not the only one going this way about it... "totally hosed" is a great description for where I've found myself after a few sessions of trying to get somewhere over the hill.
     
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
The ray setup parts of the vertex and fragment shader I'm using should be completely agnostic to the geometry it's running on. Note I go through a ton of different modifications to the shader, from being on a convex box to a camera facing quad, without mentioning the ray setup at all. The important part with orthographic views, like those used for the directional light shadows, is you can't rely on the camera position anymore since that's not where the ray is originating from. Read the entire section on shadow casting as that covers orthographic projection, and using UNITY_MATRIX_I_V instead of _WorldSpaceCameraPos to get the current view position in a way that works for both the shadow caster and main camera rendering.

Also adding "+ 0.5" to the ray origin should be totally fine (note: it should be + float4(0.5f, 0.5f, 0.5f, 0.0f) and not a w of 1.0, though I doubt you're using the w component anyway).

Queue tags inside multiple passes don't do anything at all. The Queue for all passes is determined by the material and not the shader, and only affects how / when it's rendered for the main camera. By default a material will use the first Queue tag it sees in the SubShader, but it can also be overridden on the material. Also, the Queue is entirely ignored by Unity for the shadow caster pass. Any shader with a shadow caster pass, regardless of the Queue, will be rendered in shadow maps. Directional shadows are a special beast though, because Unity's directional shadows require a proper camera depth texture. Unity generates the camera depth texture by rendering all opaque objects (Queue < 2500) to a screen space depth texture using the shadow caster pass. It then reconstructs the world space position from that depth texture and projects the directional shadows onto that to produce another screen space texture. This is why it doesn't work for transparent surfaces.

    You can work around that by setting the shadow maps to be global textures, and then use custom shadow sampling shader code to sample those shadow maps directly. Though it only works with one active shadow casting directional light at a time. This is something Unity could have added support for themselves, but did not for the BIRP. Though both the URP and HDRP do this for transparent queue materials.
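The usual way to do that is a command buffer on the light, something along these lines (a rough C# sketch; the global texture name is an arbitrary choice):
Code (CSharp):
using UnityEngine;
using UnityEngine.Rendering;

// attach to the shadow casting directional light; exposes its shadow map
// to all shaders as _MyDirectionalShadowMap (rough sketch)
[RequireComponent(typeof(Light))]
public class ExposeShadowMap : MonoBehaviour
{
    void OnEnable()
    {
        var cmd = new CommandBuffer { name = "Expose shadow map" };
        // CurrentActive is the shadow map right after it has been rendered
        cmd.SetGlobalTexture("_MyDirectionalShadowMap", BuiltinRenderTextureType.CurrentActive);
        GetComponent<Light>().AddCommandBuffer(LightEvent.AfterShadowMap, cmd);
    }

    void OnDisable()
    {
        GetComponent<Light>().RemoveAllCommandBuffers();
    }
}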

    I rather suspect your material is set to a queue of <2500 even though you changed your shader to use a transparent queue. Otherwise AFAIK Unity will not pass the shadow maps to any shader in a queue >=2500.

    Also, if you look in the game view, I suspect your volume renderer won't be visible anywhere the sky is, or will look weird in the areas that are partially transparent. The Game and Scene views differ in how they render the skybox, with the Scene view rendering the sky before anything else, and the Game view rendering it between the opaque and transparency queues (basically a queue of "2499.5") at the far plane. So anything that doesn't write to the depth buffer, or uses alpha blending over the sky, will disappear or look funny in the Game view.
     
    Last edited: Aug 6, 2021
    Torbach78 and april_4_short like this.
  7. UlfvonEschlauer

    UlfvonEschlauer

    Joined:
    Dec 3, 2014
    Posts:
    127
    Thanks for the reply again and thanks for the explanation.
It is working in the game view as well, but looking more closely, the (few and small) parts that have an opacity below a certain level are indeed invisible :(
    What I am not sure about with your setup is that you are using the pivot, rather than the vertex (the vertices of my box are on the far side of the pivot).
Ah, so Unity uses the shadowcaster pass rather than the other passes to create the depth texture for the camera (and not the light?). That might explain some of the issues I am seeing besides the projection. My shadow pass uses fewer samples to save on performance. Is that done differently between deferred and forward rendering? Because it does look a bit different in both.
     
    april_4_short likes this.
  8. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
Ah, then you're using a forward base pass with ZWrite On and clip(), and your forward add pass is using alpha blending. And a material Queue set to < 2500.

Only for calculating the camera facing quad. The worldPos near the end doesn't care how it got calculated. If you want to take my shader and replace line 125 with:
Code (CSharp):
float3 worldPos = mul(unity_ObjectToWorld, float4(v.vertex.xyz, 1.0));
Then replace the worldSpacePivotToView in line 132 with worldSpaceRayDir (that's kind of a typo anyway). Lines 77 through 122 are all there to handle the quad billboarding and can be deleted at that point (or at least comment out line 122). All the stuff with USE_CONSERVATIVE_DEPTH needs to be removed for your use case too, as that only works because I am using a camera facing quad that's being moved closer to the camera than the rest of the sphere. You can't use conservative depth when rendering back faces.


    The deferred rendering path gets the depth texture from the deferred pass by copying the depth buffer after the gbuffers have been rendered. The shadow caster pass is only used for the shadow maps when rendering deferred. It's another one of the reasons why deferred might be a better option for you as it means you're not doing the expensive version of the shader twice.


I'd recommend you look at the Window > Frame Debugger and step through how Unity is doing stuff so you can get a better understanding of exactly what is happening and when.
     
  9. UlfvonEschlauer

    UlfvonEschlauer

    Joined:
    Dec 3, 2014
    Posts:
    127
    Thanks so much for your help again!

    I actually did that already :)

    OK, makes sense.

    Did that too :)

    Did that too ;)

    True that! I suppose I will have to try to get the deferred rendering to look right first. Right now it is completely hosed, even with the changes you suggested:

Here is my resulting code for the vertex shader (note that I copied the same code over to the shadow pass vertex shader as well).
Code (CSharp):
v2f o;

UNITY_SETUP_INSTANCE_ID(v);
UNITY_TRANSFER_INSTANCE_ID(v, o);

// check if the current projection is orthographic or not from the current projection matrix
bool isOrtho = UNITY_MATRIX_P._m33 == 1.0;

// viewer position, equivalent to _WorldSpaceCameraPos.xyz, but for the current view
float3 worldSpaceViewerPos = UNITY_MATRIX_I_V._m03_m13_m23;

// view forward
float3 worldSpaceViewForward = -UNITY_MATRIX_I_V._m02_m12_m22;

// calculate world space position for the camera facing quad
float3 worldPos = mul(unity_ObjectToWorld, float4(v.vertex.xyz, 1.0));

// calculate world space view ray direction and origin for perspective or orthographic
float3 worldSpaceRayOrigin = worldSpaceViewerPos;

float3 worldSpaceRayDir = ObjSpaceViewDir(float4(v.vertex)).xyz;

if (isOrtho)
{
    worldSpaceRayDir = worldSpaceViewForward -dot(worldSpaceRayDir, worldSpaceViewForward);
    worldSpaceRayOrigin = worldPos - worldSpaceRayDir;
}

o.rayDir = mul(unity_WorldToObject, float4(worldSpaceRayDir, 0.0));
o.rayOrigin = mul(unity_WorldToObject, float4(worldSpaceRayOrigin, 1.0)) + float3(0.5f, 0.5f, 0.5f);

o.pos = UnityWorldToClipPos(worldPos);

o.uv = v.uv;

return o;
    And here is what I see in the Game View with orthographic projection:
    CaptureUnityShaderOrthoProblem02.PNG

I don't get to see anything at all with perspective projection in either view :(
     
  10. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
That should have been a "* -" and not just a "-" ... going by what I said above. Though I actually missed that it should only be the "*", as the worldSpacePivotToView and worldSpaceRayDir are flipped from each other.

Here are the modifications I used on the sphere shader.
Code (CSharp):
// common vertex function for all passes
v2f vert (appdata v)
{
    v2f o;

    // instancing
    UNITY_SETUP_INSTANCE_ID(v);
    UNITY_TRANSFER_INSTANCE_ID(v, o);

    // check if the current projection is orthographic or not from the current projection matrix
    bool isOrtho = UNITY_MATRIX_P._m33 == 1.0;

    // viewer position, equivalent to _WorldSpaceCameraPos.xyz, but for the current view
    float3 worldSpaceViewerPos = UNITY_MATRIX_I_V._m03_m13_m23;

    // view forward
    float3 worldSpaceViewForward = -UNITY_MATRIX_I_V._m02_m12_m22;

    // calculate world space position for the camera facing quad
    float3 worldPos = mul(unity_WorldToObject, float4(v.vertex.xyz, 1.0));

    // calculate world space view ray direction and origin for perspective or orthographic
    float3 worldSpaceRayOrigin = worldSpaceViewerPos;
    float3 worldSpaceRayDir = worldPos - worldSpaceRayOrigin;
    if (isOrtho)
    {
        worldSpaceRayDir = worldSpaceViewForward * dot(worldSpaceRayDir, worldSpaceViewForward);
        worldSpaceRayOrigin = worldPos - worldSpaceRayDir;
    }

    // output object space ray direction and origin
    o.rayDir = mul(unity_WorldToObject, float4(worldSpaceRayDir, 0.0));
    o.rayOrigin = mul(unity_WorldToObject, float4(worldSpaceRayOrigin, 1.0)) + 0.5;

    o.pos = UnityWorldToClipPos(worldPos);

    return o;
}

// later on in the fragment shader

    // ray sphere intersection (setting the origin at 0.5)
    float rayHit = sphIntersect(rayOrigin, rayDir, float4(0.5, 0.5, 0.5, 0.5));

// ...

    // calculate object space position from ray, front hit ray length, and ray origin (offset back to centered on 0,0,0)
    float3 surfacePos = rayDir * rayHit + rayOrigin - 0.5;
Note that in the fragment shader I make a ton of assumptions about the sphere being at object space 0,0,0 for things like the normals, and for calculating the final depth offset with the object to world transform. So I offset the final "surface position" back to account for the ray origin being offset.
     
  11. UlfvonEschlauer

    UlfvonEschlauer

    Joined:
    Dec 3, 2014
    Posts:
    127
    Thanks so much for your help and patience again!
    We are getting close!
    So if I use your vertex shader for the shadowcaster pass only, I finally get some directional shadows! Yay!
Note that for some reason I have to multiply the worldPos by 10 for things to show up correctly. This could be an issue in my C# code somewhere. I believe it could have to do with the fact that I am scaling the parent object by a factor of 10 (1 unit = 1 meter is just too small, and that is causing issues with all sorts of things). The weird thing is that I do not have to do that multiplication in my original vertex shader... Hmmmm.
    CaptureUnityShaderOrthoProblem04.PNG

Of course when I use this for the shadowcaster vertex shader only, I still do not get the correct result in orthographic views (duh!). The problem is that when I use your latest vertex shader in the base pass, everything gets weird again...
    With a directional light, things now look like this:
    CaptureUnityShaderOrthoProblem05.PNG

    And with a spotlight, my volume object disappears completely (still seems to be casting a correct shadow though).
    That is in both forward and deferred render modes and in both perspective and ortho views (strangely the ortho projection now looks correct, at least?)
    Most bizarre...
    Note that I did not make any changes to my fragment shader code, as I don't think I need to offset things again like you did in your fragment shader (otherwise why offset things in the vertex shader in the first place?).
    In any case, we are getting really close here!
     
    april_4_short likes this.
  12. andybak

    andybak

    Joined:
    Jan 14, 2017
    Posts:
    569
I'm so bookmarking this thread...
     
    april_4_short likes this.
  13. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
    Are you using the default Cube mesh? And how are you calculating the output depth? If you make the above changes to the sphere shader, does it still work for you?
     
  14. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
My sphere shader is set up to make sure the in-shader calculated position is explicitly in the object space of the mesh being rendered. This lets me use the unity_ObjectToWorld matrix to get the world space position, which properly accounts for any kind of scaling the mesh might have. This makes me think you're not doing that in your shader and are relying on some specific scale expectations.

It's all floating point math, so stuff at a scale of 1:1 vs 10:1 doesn't really change anything. The relative accuracy is actually better when using a 1:1 scale. This again makes me think you have some hard coded scale factors in your shader.
     
  15. UlfvonEschlauer

    UlfvonEschlauer

    Joined:
    Dec 3, 2014
    Posts:
    127
    Thanks so much again for taking the time to reply! I really appreciate all your help! People like you are what make Unity great!

The mesh itself is a simple cube with the default extents. But it is parented to an empty object (with some code on it) that is scaled to the correct dimensions of the volume (and handles the rotation of the volume as well).
E.g., for this volume the scale is 2.5, 2.5, 3.21.
That is 10 times the actual volume dimension (when assuming that 1 unit == 1 m), but Unity is just a pain at these small scales, e.g. shadows are too blurry and navigation is too coarse (Unity devs! It would be really nice if we had more ways to adjust these things to scene scale!).
    Not sure if that makes any difference.

In the volume rendering shader itself, there are only two uniforms that could be affected by the scale-up (other than the code in the vertex shader, maybe). One is the sampling distance along the ray. The other is used to calculate the normals. Neither should have any influence on the projection.

    So the basic code for finding the sample position is this:
Code (CSharp):
currentSamplePosition += rayDirection /*-normalize(i.rayDir)*/ * sampleDistance;
    (I am rendering front to back for obvious reasons).

The actual ray start is computed from a simple ray-AABB intersection.
Code (CSharp):
IntersectRayAABB2(cameraLocalPos /*i.rayOrigin*/, rayDirection /*-normalize(i.rayDir)*/, start /*returns distance to front of BB along ray*/);
float3 rayStartPos = cameraLocalPos + rayDirection * start;
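(For reference, a generic slab method ray vs AABB test, not my actual IntersectRayAABB2, looks something like this:)
Code (CSharp):
// generic slab method ray vs axis aligned box, box spanning boxMin..boxMax
// returns entry (x) and exit (y) distances along the ray; it's a miss if x > y
float2 intersectRayAABB (float3 rayOrigin, float3 rayDir, float3 boxMin, float3 boxMax)
{
    float3 invDir = 1.0 / rayDir;
    float3 t0 = (boxMin - rayOrigin) * invDir;
    float3 t1 = (boxMax - rayOrigin) * invDir;
    float3 tMin = min(t0, t1);
    float3 tMax = max(t0, t1);
    float tNear = max(max(tMin.x, tMin.y), tMin.z);
    float tFar = min(min(tMax.x, tMax.y), tMax.z);
    return float2(tNear, tFar);
}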
    Code for the world/clip coordinates for the lighting calculations:

Code (CSharp):
float3 worldPos = mul(unity_ObjectToWorld, float4(currentSamplePosition, 1.0) - float4(0.5f, 0.5f, 0.5f, 0.0f));
float4 clipPos = UnityWorldToClipPos(float4(worldPos, 1.0));

/* this part is pretty much your code */
#if defined (SHADOWS_SCREEN)
    // setup shadow struct for screen space shadows
    shadowInput shadowIN;

    #if defined(UNITY_NO_SCREENSPACE_SHADOWS)
    // mobile directional shadow
    shadowIN._ShadowCoord = mul(unity_WorldToShadow[0], float4(worldPos, 1.0));
    #else
    // screen space directional shadow
    shadowIN._ShadowCoord = ComputeScreenPos(clipPos);
    #endif // UNITY_NO_SCREENSPACE_SHADOWS
#else
    // no shadow, or no directional shadow
    float shadowIN = 0;
#endif // SHADOWS_SCREEN
    22.  
This is the code to calculate the depth:
Code (CSharp):
outDepth = localToDepth(depthwriteposition /* a currentSamplePosition */ - float3(0.5f, 0.5f, 0.5f));
    Hope that was explanatory enough.
    Thanks again!
     
    Last edited: Aug 8, 2021
  16. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
How blurry the shadows are can be adjusted using the shadow distance and cascade ratios.
    https://docs.unity3d.com/Manual/shadow-distance.html
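From C# that's just (example values, not a recommendation):
Code (CSharp):
using UnityEngine;

// tighter shadow distance = more shadow map resolution near the camera
public class TightenShadows : MonoBehaviour
{
    void Start()
    {
        QualitySettings.shadowDistance = 10f; // example value, tune to the scene
        // push most of the resolution into the closest cascade (example split)
        QualitySettings.shadowCascade4Split = new Vector3(0.05f, 0.15f, 0.35f);
    }
}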

Navigation in the scene view can be tweaked in the camera settings. Though there's also a kind of implicit "zoom" scale, adjusted with the mouse wheel or by pressing F to focus on the bounds of a selection, that changes some other speeds.
    upload_2021-8-7_23-46-35.png
    And the game view is whatever you code it to do.

But yes, the defaults tend to be designed more around quick navigation of a human scale world rather than singular objects. There are some hard set limits, like the camera near plane, that can make very small scale stuff at 1 unit : 1 meter harder to do. But the scale is also totally arbitrary, so there's nothing really wrong with going with a different scale if it makes things easier to deal with.


Really, my concern with the scaling was that if you're having to do manual scaling in the shader, then something isn't matching the other assumptions we're making in the shader. Specifically that your "local" space is a normalized 1 unit range, either between -0.5 and +0.5 or 0.0 and 1.0. If you arbitrarily scale the sphere shader, or have it be a child object of a parent and either are arbitrarily scaled, it "just works"* (in terms of the raytracing; the quad scaling and conservative depth stuff can fail when there's shearing caused by the child object being rotated and scaled under a scaled parent).

    Now there's nothing in the above code you showed that tells me why that might be, but I really do wonder if your cube mesh isn't the correct scale. Or if you're doing some implicit scaling in the raymarching part.

    Another thought would be if you're going to use a custom mesh, have it go from a 0.0 to 1.0 range to begin with, and let the parent game object be at the mid point for sanity. That should get rid of all of the offsetting.
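That could be as simple as cloning the default cube mesh and shifting its vertices from C#, something like this (rough sketch):
Code (CSharp):
using UnityEngine;

// clones the mesh on this object and offsets its vertices so they span
// 0.0 to 1.0 instead of -0.5 to +0.5 (rough sketch of the suggestion above)
[RequireComponent(typeof(MeshFilter))]
public class OffsetCubeMesh : MonoBehaviour
{
    void Start()
    {
        var mf = GetComponent<MeshFilter>();
        var mesh = Instantiate(mf.sharedMesh);
        var verts = mesh.vertices;
        for (int i = 0; i < verts.Length; i++)
            verts[i] += new Vector3(0.5f, 0.5f, 0.5f);
        mesh.vertices = verts;
        mesh.RecalculateBounds();
        mf.sharedMesh = mesh;
    }
}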

You do need to do that - 0.5 offset. Because otherwise the positions are not in the mesh's actual object space, and the transforms using unity_ObjectToWorld rely on that matching. Though the offset mesh would fix that.
     
  17. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
Thank you for the compliments.

While I might be the most prolific poster in this particular area of Unity, I am not the only one here. And there are a few even more prolific posters in the rest of the forum. They just focus on other parts of it.

The graphics subforums used to be the domain of a few past prolific posters who have since moved on, as I'm sure I will at some point as well. And I hope others will try to fill in when that eventually happens.

I currently reside in the Seattle area, and while I don't currently have plans to visit Motor City, I'd happily take you up on that offer if I ever do.
     
    andybak likes this.
  18. UlfvonEschlauer

    UlfvonEschlauer

    Joined:
    Dec 3, 2014
    Posts:
    127
    Thanks again for the reply!
I am aware of those settings, but there are lower limits to them. E.g. the Camera Speed setting cannot be lower than 0.01.
That is a problem while I am trying to navigate through the volume in the Editor window, and it would likely bite me in the behind once (if) I get to implement my more complex tools like segmentation and measurement.

    OK, that might be a source of the issues and I have been wondering about that too.
    I mean, I am not doing any explicit scaling in the shader, but as I mentioned earlier, the parent object is getting scaled to the volume dimensions (multiplied by 10 anyway). Maybe UNITY_MATRIX_I_V does not account for the parent's scaling factor?

    Having trouble following you there...
     
  19. UlfvonEschlauer

    UlfvonEschlauer

    Joined:
    Dec 3, 2014
    Posts:
    127
Follow-up: I did a test without the parent object and it still looks the same. Even when I reset scale and rotation to 1 it does not change anything.
     
  20. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
The UNITY_MATRIX_V and UNITY_MATRIX_I_V are the world to view and view to world matrices. They explicitly never have any scale*; only the unity_ObjectToWorld and unity_WorldToObject matrices should have any scale. And that scale is the total accumulation of all of the transforms in that game object's hierarchy, including possibly an additional transform from the mesh import settings.

* For the default rendering. You can manually override the view matrix from C# in some cases, at which point it can be anything at all.

If to calculate the output depth you're using the unity_ObjectToWorld matrix, or the UnityObjectToClipPos() function (which uses the object to world matrix), then the position you start with must be in local space. If you offset the ray start position, you have to offset the resulting position back, otherwise it's offset from the local object space.

    You’re doing the offset in the shader because the “object space” for most cube meshes have the vertices centered around the pivot, but textures are sampled in a 0.0 to 1.0 UV range.

    If the mesh’s vertex positions weren’t centered on the pivot, but instead one vertex was at 0,0,0 and the others were offset in the positive axes, there’d be no need for the in shader offset as the “local space” positions would always be within that 0.0 to 1.0 range.
     
  21. UlfvonEschlauer

    UlfvonEschlauer

    Joined:
    Dec 3, 2014
    Posts:
    127
    So what does that mean for me now?
    I mean, is there anything I have to do beyond what I am doing already? What else should I try to make this work?

But I AM offsetting the sample positions back, and I am using localToDepth():

Code (CSharp):
outDepth = localToDepth(depthwriteposition - float3(0.5f, 0.5f, 0.5f));
     
  22. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
The image you have above, of the directional shadow map "working" with my vertex shader but the object being invisible, is showing that the shadow caster pass is outputting the correct depth for both the orthographic shadow map and the perspective camera depth texture. The plane there is showing the shadows on its face because the depth in the camera depth texture is correct.

    This makes me think there's some difference between the shadow caster and forward pass's fragment shader in terms of how it's calculating the depth.

Then I misunderstood this statement. This comment I took to mean you weren't doing the - 0.5 offset for the forward base pass. If you are, then I don't know what the difference is. Though you also don't specify exactly what that localToDepth() function is doing.


    One last thing, I asked earlier if you were using the default cube. Explicitly the default cube mesh, as that has a known vertex range and scale. Your response was it was a "simple cube with the default extents", but that doesn't tell me if that's the default cube mesh, or one that's been made in an external application and imported, or one that's built from script. If it's one that's been imported, unless you're very careful with unit scales that mesh may have a hidden scale factor that's not reflected by game object transforms. Which is why I asked if it was the default cube mesh.
     
  23. UlfvonEschlauer

    UlfvonEschlauer

    Joined:
    Dec 3, 2014
    Posts:
    127
    Thanks for the reply, once again!

    Sorry, yes localToDepth() is actually a function I stole from elsewhere, hehehe ;)
Code (CSharp):
float localToDepth (float3 depthPosition /* currentsampleposition - 0.5f */)
{
    float4 clipPos = UnityObjectToClipPos(float4(depthPosition, 1.0f));
    return clipPos.z / clipPos.w;
}
I know that this function should be correct, since my volume is z-blended correctly with the rest of the geometry in all my other versions of the shader (in perspective view mode, anyway). Also, this function is pretty much the same thing you are doing anyway, just skipping the step of transforming to world position first.
In hindsight, I could probably eliminate it, since I already have the clip position calculated earlier in the code above.

    Now, I figured out a few more things.
    Here is my current vertex shader code again for reference:

Code (CSharp):
v2f o;

// instancing
UNITY_SETUP_INSTANCE_ID(v);
UNITY_TRANSFER_INSTANCE_ID(v, o);

// check if the current projection is orthographic or not from the current projection matrix
bool isOrtho = UNITY_MATRIX_P._m33 == 1.0;

// viewer position, equivalent to _WorldSpaceCameraPos.xyz, but for the current view
float3 worldSpaceViewerPos = UNITY_MATRIX_I_V._m03_m13_m23;

// view forward
float3 worldSpaceViewForward = -UNITY_MATRIX_I_V._m02_m12_m22;

// calculate world space position for the camera facing quad
float3 worldPos = mul(unity_WorldToObject, float4(v.vertex.xyz, 1.0)) * 10.0f;

// calculate world space view ray direction and origin for perspective or orthographic
float3 worldSpaceRayOrigin = worldSpaceViewerPos;
float3 worldSpaceRayDir = worldPos - worldSpaceRayOrigin;

if (isOrtho)
{
    worldSpaceRayDir = worldSpaceViewForward * dot(worldSpaceRayDir, worldSpaceViewForward);
    worldSpaceRayOrigin = worldPos - worldSpaceRayDir;
}

// output object space ray direction and origin
o.rayDir = mul(unity_WorldToObject, float4(worldSpaceRayDir, 1.0));
o.rayOrigin = mul(unity_WorldToObject, float4(worldSpaceRayOrigin, 1.0)) + float4(0.5f, 0.5f, 0.5f, 0.0f);
o.pos = UnityObjectToClipPos(v.vertex);

o.uv = v.uv;

return o;
Now, when I replace just the line
Code (CSharp):
o.rayDir = mul(unity_WorldToObject, float4(worldSpaceRayDir, 1.0));
with this line:
Code (CSharp):
o.rayDir = ObjSpaceViewDir(float4(v.vertex)).xyz;
my perspective views work correctly (but the ortho views do not).
So I have focused my recent experimentation on the code for the rayDir and did a lot of experiments.
Turns out that o.rayDir is inverted!
If I multiply it by -1 I can see my volume again. BUT it is still not correct.
It looks strangely squished and slightly smaller compared to the shadow volume (and my reference shaders). It also gets distorted a bit with the view direction:
    CaptureUnityShaderOrthoProblem06.PNG
    This looks the same in both perspective and ortho views...

Edit: I also figured out that the distortions and scaling are definitely related to the scaling of the volume object (which really is just a Unity cube that I am loading from a prefab and parenting to the scaled and rotated parent object).
Anyway, if I unparent it, reset the scale to 1:1:1, and remove the *10.0f from the worldPos, then the rendered volume fills out the cube perfectly (as it should).
Of course it is squished then, but that is expected.
    CaptureUnityShaderOrthoProblem06b.PNG

    However when I rotate it by some angle, it once again gets wildly distorted in all directions.
    CaptureUnityShaderOrthoProblem07.PNG
    Maybe this new information will give you an idea what else could be off here? Meanwhile I will keep trying some more things.
    Thanks so much again!
    Ulf
     
    Last edited: Aug 10, 2021
    Torbach78 likes this.
  24. UlfvonEschlauer

    UlfvonEschlauer

    Joined:
    Dec 3, 2014
    Posts:
    127
    I guess bgolus has given up on me :(
    Anyone else who has any idea what I am doing wrong here?
     
  25. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
The main thing is the shadow caster pass appears to be working properly. So keep comparing the forward pass and the shadow caster pass and find the difference. Then make the forward pass work like your shadow caster pass.

    I can't really offer any other advice.
     
  26. UlfvonEschlauer

    UlfvonEschlauer

    Joined:
    Dec 3, 2014
    Posts:
    127
    Just to give a bit of an update for the few that are watching this thread:

I have put the whole shadow map thing aside for a bit. It just was too frustrating. So instead, I focused on quality (the goal is cinematic quality on moderate hardware) and my raytraced shadows, which now also perform very well. I have added post filtering for the shadows, which allows me to get away with fewer samples. Ambient occlusion is also fully raytraced.
Everything is real time. There is no pre-processing of anything. Only the raw dataset (and other scene content, of course) is held in memory.
These are running at about 45 to 65 frames per second (depending on dataset and settings) on a GTX 1060. That is not much slower than the shadow maps, and it looks much better.
    I also still have some optimizations that I can do/try:

    Capture35.jpeg

    Capture30.jpeg
    Translucency and scattering (exaggerated) and light from behind/slightly above:
    Capture34b.jpeg
     
    Torbach78, PutridEx and bgolus like this.
  27. Pepe-Hoschi

    Pepe-Hoschi

    Joined:
    Feb 27, 2016
    Posts:
    8
This shader excerpt will work fine for raymarching within a box with an orthographic/perspective camera (tested in the legacy pipeline).
Hope it helps others...

Code (CSharp):
// VS:
bool isOrtho = UNITY_MATRIX_P._m33 == 1.0;
o.camPosLocal = mul(unity_WorldToObject, float4(_WorldSpaceCameraPos, 1));
float3 viewDir = UNITY_MATRIX_IT_MV[2].xyz;
o.objectRayDir = isOrtho ? normalize(viewDir) : normalize(o.camPosLocal.xyz - v.vertex.xyz);
o.objectRayOrigin = v.vertex.xyz + float3(0.5f, 0.5f, 0.5f);

// PS:
#define NUM_STEPS 256
const float stepSize = 1.732f / NUM_STEPS; // 1.732 ~ sqrt(3), the diagonal of a unit cube
float3 rayDir = i.objectRayDir;
float3 rayStartPos = i.objectRayOrigin;
rayStartPos += rayDir * stepSize * NUM_STEPS; // jump to the far end of the ray
rayDir = -rayDir;                             // then march back toward the viewer

// Raymarch
for (uint iStep = 0; iStep < NUM_STEPS; iStep++)
{
    const float t = iStep * stepSize;
    p = rayStartPos + rayDir * t;
    d = sampleSDF(p); // samples a volume texture at the given coordinate
    if (d > _MinVal && d < _MaxVal)
    { /* ... do stuff here ... */ }
}
     
    UlfvonEschlauer likes this.