Good Soft Particle Diffuse Transparent, but could be better?

Discussion in 'Shaders' started by RedRiverStudio, Apr 14, 2020.

  1. RedRiverStudio


    Sep 8, 2014
    I made a shader to pseudo-accurately pick up scene lights for my particle smoke. It is a version of the Legacy Diffuse Transparent, but ramrodded with another shader's soft particle effect.

    I am really happy with how it looks, feel free to use it if you need something similar.

    My question is: Can this shader be improved and optimized? I don't know what I am doing and this was created by lots of trial and error.

    I had to bump the shader model up to 4.0 to avoid the 10 instruction limit. This shader has 11. Also, I had to comment out the SOFTPARTICLES_ON because, for whatever reason, it never knows when Unity has this option clicked, so I have to force it on all the time. If you have a particular question as to why I added a line or made a choice, the answer will typically be "I have no idea, it's the only thing that worked".

    Code (CSharp):
    1. Shader "Vector/DiffuseSoft" {
    2.     Properties{
    3.         _Color("Main Color", Color) = (1,1,1,1)
    4.         _MainTex("Base (RGB) Trans (A)", 2D) = "white" {}
    5.         _InvFade("Soft Particles Factor", Range(0.01,1.0)) = .02
    6.     }
    7.     Category{
    8.         Tags{ "Queue" = "Transparent" "IgnoreProjector" = "True" "RenderType" = "Transparent" }
    9.         LOD 200
    10.         ZWrite Off
    12.         SubShader{
    14.                 CGPROGRAM
    15.                 #pragma surface surf Lambert alpha:fade
    16.                 #pragma vertex vert
    17.                 #pragma multi_compile_particles
    19.                 #pragma target 4.0
    20.                 #include "UnityCG.cginc"
    22.                 sampler2D _MainTex;
    23.                 fixed4 _Color;
    26.                 struct Input {
    27.                     float2 uv_MainTex;
    28.                     float4 color : COLOR;
    29.                     float2 texcoord : TEXCOORD0;
    30.                     //#ifdef SOFTPARTICLES_ON
    31.                             float4 projPos : TEXCOORD2;
    32.                     //#endif
    33.                 };
    36.                 void vert(inout appdata_full v, out Input o)
    37.                 {
    38.                     o.uv_MainTex = v.texcoord;
    39.                     o.color = v.color;
    40.                     o.texcoord = v.texcoord;
    41.                     //#ifdef SOFTPARTICLES_ON
    42.                             float4 hpos = UnityObjectToClipPos(v.vertex);
    43.                             o.projPos = ComputeScreenPos(hpos);
    44.                             COMPUTE_EYEDEPTH(o.projPos.z);
    45.                     //#endif
    47.                 }
    49.                 UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);
    50.                 float _InvFade;
    52.                 void surf(Input IN, inout SurfaceOutput o) {
    53.                     //#ifdef SOFTPARTICLES_ON
    54.                             float sceneZ = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(IN.projPos)));
    55.                             float partZ = IN.projPos.z;
    56.                             float fade = saturate(_InvFade * (sceneZ - partZ));
    57.                             IN.color.a *= fade;
    58.                     //#endif
    59.                     fixed4 c = tex2D(_MainTex, IN.uv_MainTex);
    60.                     o.Albedo = c.rgb + _Color;
    61.                     o.Alpha = c.a * IN.color.a;
    63.                 }
    64.                 ENDCG
    65.         }
    66.     }
    67. Fallback "Legacy Shaders/Transparent/Diffuse"
    68. }
    ferdinandsegers likes this.
  2. Namey5


    Jul 5, 2013
    For the most part you've done a good job of combining two different ways of writing shaders. However, there are definitely a few things to point out. Here is an 'improved' version of your shader;

    Code (CSharp):
    1. Shader "Custom/LitParticle"
    2. {
    3.     Properties
    4.     {
    5.         _Color ("Color", Color) = (1,1,1,1)
    6.         _MainTex ("Albedo (RGB)", 2D) = "white" {}
    7.         _InvFade ("Soft Particles Factor", Range (0.01, 1.0)) = .02
    8.     }
    9.     SubShader
    10.     {
    11.         Tags { "RenderType"="Transparent" "Queue"="Transparent" "IgnoreProjector" = "True" }
    12.         LOD 200
    14.         ColorMask RGB
    16.         CGPROGRAM
    17.         #pragma surface surf Lambert alpha:fade
    18.         #pragma multi_compile _ SOFTPARTICLES_ON
    19.         #pragma target 3.0
    21.         sampler2D _MainTex;
    22.         sampler2D_float _CameraDepthTexture;
    24.         struct Input
    25.         {
    26.             float2 uv_MainTex;
    27.             fixed4 color : COLOR;
    28.             #ifdef SOFTPARTICLES_ON
    29.             float4 screenPos;
    30.             #endif
    31.         };
    33.         half _InvFade;
    34.         fixed4 _Color;
    36.         void surf (Input IN, inout SurfaceOutput o)
    37.         {
    38.             #ifdef SOFTPARTICLES_ON
    39.        /= IN.screenPos.w;
    40.                 float depth = LinearEyeDepth (tex2D (_CameraDepthTexture, IN.screenPos.xy).r);
    41.                 float dist = LinearEyeDepth (IN.screenPos.z);
    42.                 float fade = saturate (_InvFade * (depth - dist));
    43.                 IN.color.a *= fade;
    44.             #endif
    46.             fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
    47.             o.Albedo = c.rgb + _Color.rgb;
    48.             o.Alpha = c.a * IN.color.a;
    49.         }
    50.         ENDCG
    51.     }
    52.     FallBack "Legacy Shaders/Transparent/Diffuse"
    53. }
    Let's start with the SOFTPARTICLES_ON tag. I would imagine the reason this no longer worked in your example is that the compile directive that enabled it wasn't meant to work with surface shaders, so we have to specify it manually using this (like in Unity's own Standard particle shader);

    Code (CSharp):
    #pragma multi_compile _ SOFTPARTICLES_ON
    'multi_compile' takes two names, one for each shader variant that is created. In this case, we pass an underscore for the first one, which basically just means we only care about the second as we only change code based on that.

    Next is the vertex program. The syntax is a little different for surface shaders, and you only need to assign whatever isn't automatically created by Unity (uv_MainTex, for instance, is generated when the surface shader is compiled). In this case we can skip the vertex program altogether, because surface shaders expose both vertex colour and screen-space position as auto-generated Input members. Dropping it also solves you hitting the interpolator limit, which was caused by manually adding interpolator semantics to the Input struct's members (those were then stacked on top of the surface shader's own internal interpolators).

    Because we are no longer calculating view depth in the vertex shader, we will need to do it manually in the surface shader;

    Code (CSharp):
    float dist = LinearEyeDepth (IN.screenPos.z);
    The last thing I will note is that I'm surprised the lighting works properly at all. Diffuse shading is based on the surface normal, and I'm not 100% sure that would be defined for particles. Not to mention that particles are most often billboards, so even if they were the normals would always be facing the camera which could lead to inconsistent lighting.

    Hope that helps.
    RedRiverStudio likes this.
  3. RedRiverStudio


    Sep 8, 2014
    Holy cow Namey5, fantastic explanation. Thanks so much. The improved shader works great and alleviates both of those fears.

    Yes, the lighting is sort of half-baked, in the sense that it can only pick up lights from sources that are in front of the particles. The particle emitter has a normal adjustment spinner that can move the normal around to pick up lights from other directions, which is incredibly useful. Attached is the effect in game:

    You can see there is direct blue lighting at the top, but also a soft ambient sunlight picked up on the upper surface, and the effect stays consistent as you rotate around the model. Compare that to the Unity "Standard Particle" shader, where the lighting changes rapidly with the viewing angle and is really useless for any serious fog or smoke effect.

    Thanks again, hopefully somebody else can use this shader for their project.

    Attached Files:

  4. Namey5


    Jul 5, 2013
    I wasn't sure how you were lighting your particles, so I left this out of the original explanation, but if you wanted more consistent particle lighting you can light particles based on light attenuation only;

    Code (CSharp):
    #pragma surface surf Custom alpha:fade

    ...

    half4 LightingCustom (SurfaceOutput s, half3 lightDir, fixed atten)
    {
        half4 c;
        c.rgb = s.Albedo * _LightColor0.rgb * atten;
        c.a = s.Alpha;
        return c;
    }
    This is a lot closer to how particles would really be lit; the only downside is that attenuation is constant for directional lights, so particles would always receive lighting from the sun no matter what. There is a way to get shadows from a directional light source for transparent objects that would help with this, so let me know if anyone is interested and I can write a small post on that.
    RedRiverStudio likes this.
  5. RedRiverStudio


    Sep 8, 2014
    Happy to give it a shot, I have a Directional Sunlight and multiple spot and point lights that dynamically buzz about. I will try to integrate it and see if it works with my project.

    I have looked into it and multiple sources have said that shadows do not work with transparent shaders in Unity full stop, but my cloud layer creates shadows, so not sure who to believe. If you think you can get this shader to cast and receive shadows, I would be incredibly interested.

    Fun fact about my above shader: It turns completely invisible when light source shadows are disabled. I don't know why, but I just turned on all the shadows in all scenes to combat it, without too much ill effect.
  6. Namey5


    Jul 5, 2013
    You can think of shadows in two parts - shadow casting and shadow receiving. Any object can cast a shadow, hence why your clouds and most transparent objects would. As far as shadow receiving for transparent objects, the simple answer is that you can't. But simple isn't often reality, and whilst it is possible (albeit somewhat basic), it requires some more low level graphics work - hence why it's easier to just say that you can't.

    The reason shadow receiving is complicated in Unity specifically is because Unity doesn't actually sample shadows in the shader - instead they run a shadow receiving pass before the main shading pass and sample shadows based off depth information (which only exists for opaque objects), the results of which are stored in a texture for use during shading. While we're here, this is also why your soft-particles only work when shadows are enabled - by default in forward rendering the depth texture is only created when it is explicitly needed (like in shadow rendering). If you have effects that rely on it, you need to manually request the depth texture per-camera i.e.

    Code (CSharp):
    camera.depthTextureMode |= DepthTextureMode.Depth;
    A few years ago, Unity introduced command buffers - a way of injecting custom functionality into parts of the render loop that were previously inaccessible (including shadow rendering). Using these, we can actually get a copy of the base shadowmap for a light source and set it globally such that it can be accessed from any shader that needs it. Here is a script that you can attach to your main directional light source that will do such a thing;

    Code (CSharp):
    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    //Command buffers are part of the Rendering namespace, so include that
    using UnityEngine.Rendering;

    //Force this to run in the editor and only when a light source is attached to the same object
    [ExecuteInEditMode, RequireComponent(typeof(Light))]
    public class ShadowmapCopy : MonoBehaviour
    {
        CommandBuffer buffer;
        new Light light;

        private void OnEnable ()
        {
            //Get the light source, and return early if it isn't directional or doesn't cast shadows
            light = GetComponent<Light>();
            if (light.type != LightType.Directional || light.shadows == LightShadows.None)
                return;

            //Create a new command buffer and issue a command to set the raw shadowmap globally
            buffer = new CommandBuffer () { name = "Shadowmap Copy" };
            buffer.SetGlobalTexture ("_SunShadowmap", BuiltinRenderTextureType.CurrentActive);

            //Add the command buffer just after shadowmap rendering
            light.AddCommandBuffer (LightEvent.AfterShadowMap, buffer);
        }

        private void OnDisable ()
        {
            //Clean up resources when this object is finished
            if (buffer != null)
            {
                //Also remove the command buffer from the light if it still exists to stop duplication
                if (light)
                    light.RemoveCommandBuffer (LightEvent.AfterShadowMap, buffer);

                buffer.Clear ();
                buffer = null;
            }
        }
    }
    Now we have global access to the raw shadowmap, but because we are bypassing the normal rendering process, we need to sample shadows ourselves in the shader.

    To start off with, we will need to include some libraries to help us work with the shadows. Chances are these are already included as part of the surface shader generation, but we'll include them anyway to make sure;

    Code (CSharp):
    1. #include "Lighting.cginc"
    2. #include "AutoLight.cginc"
    We will also need to declare the shadowmap such that we can use it, which can be done using a macro from AutoLight.cginc;

    Code (CSharp):
    ...
    sampler2D_float _CameraDepthTexture;
    UNITY_DECLARE_SHADOWMAP(_SunShadowmap);
    ...
    One more thing we will need to work with shadows is the world-space position of the current fragment. This is another auto-generated surface shader variable, so add that to the Input struct;

    Code (CSharp):
    ...
    struct Input
    {
        float2 uv_MainTex;
        fixed4 color : COLOR;
        float3 worldPos;
    ...
    Now, we can get to sampling the shadowmap. The basic idea is to transform our world-space position into light-space and check if it is behind something using the shadowmap. In your surface shader;

    Code (CSharp):
    ...
    o.Alpha = c.a * IN.color.a;

    //Find the shadow-space position of the current fragment for every shadow cascade
    float3 shadowCoord0 = mul (unity_WorldToShadow[0], float4 (IN.worldPos, 1)).xyz;
    float3 shadowCoord1 = mul (unity_WorldToShadow[1], float4 (IN.worldPos, 1)).xyz;
    float3 shadowCoord2 = mul (unity_WorldToShadow[2], float4 (IN.worldPos, 1)).xyz;
    float3 shadowCoord3 = mul (unity_WorldToShadow[3], float4 (IN.worldPos, 1)).xyz;

    //These weights are used to figure out which shadow cascade is needed
    float cascadeDistance = distance (IN.worldPos, _WorldSpaceCameraPos);
    float4 zNear = float4 (cascadeDistance >= _LightSplitsNear);
    float4 zFar = float4 (cascadeDistance < _LightSplitsFar);
    float4 weights = zNear * zFar;

    //Find the shadow coords for the current cascade
    float4 shadowCoord = float4 (shadowCoord0 * weights[0] + shadowCoord1 * weights[1] + shadowCoord2 * weights[2] + shadowCoord3 * weights[3], 1);

    //Sample the shadowmap and store the results in an unused output (in this case, specular)
    o.Specular = UNITY_SAMPLE_SHADOW (_SunShadowmap, shadowCoord);
    ...
    Finally, we can pass the shadows to the lighting function and attenuate the lighting based on that. However, we only want to attenuate the directional light, so we check if we are currently operating on the directional light by seeing if the w component of _WorldSpaceLightPos0 is equal to 0;

    Code (CSharp):
    ...
    half4 LightingCustom (SurfaceOutput s, half3 lightDir, fixed atten)
    {
        //Multiply attenuation by the shadowmap, but only when we are working with the directional light.
        //The '* 0.5' is optional, I just found that the directional light was too overpowering
        atten *= (_WorldSpaceLightPos0.w == 0) ? s.Specular * 0.5 : 1.0;

        half4 c;
    ...
    And that's about it. Keep in mind, this works on the assumption that you only have one main directional light. The shadows will also be unfiltered because filtering is done during that screen-space prepass we skipped. If you want softer shadows, you can jitter the sampling using noise/take multiple samples. You could also raymarch through the shadows if you are using this for particles to approximate scattering, but these are all decently expensive.

    Shadow casting is based off the shader's fallback, although semi-transparent shadow casting is a bit more difficult. The standard shader uses dithering to approximate semi-transparent shadows, although in this case you could probably use a cutout shader to get the base outline of the smoke.
    RedRiverStudio likes this.
  7. RedRiverStudio


    Sep 8, 2014
    This sounds great. I am eager to give this a try. It's nice that this is shader/object specific, so you aren't rewriting the entire lighting structure of Unity. I am all about modular additions.

    Because they are view-dependent particles, I wonder how they will handle the shadows. If they are in line with the shadow they might read either starkly dark or light, and if they are "backlit" by a shadow, will it still register at all? I will post the results!
    Namey5 likes this.
  8. RedRiverStudio


    Sep 8, 2014
    Ok, I have managed to get the shadows to work, and it works about as well as we expected, haha. All the elements are there, and it looks fabulous in still shots, but when you start to move and the particles shift angles, things get a bit sketchy. The big issue with my scene is that my particles move apart when they hit the ground, so because there aren't enough particles you get "ghosting" of shadows on individual particles, which starts to break immersion a bit.

    This is probably as close to volumetric as you can get without post processing.

    I have a sun-shafts effect that I run when you get close to the machine, which highlights the smoke rather than shadowing it, so that should be good enough for the time being.

    I might experiment some more with what you have shown me to try to soften the effect. There's a lot of dust and smoke in my game, so adding shadows like this would really enhance it.

    Attached Files:

  9. Namey5


    Jul 5, 2013
    I kind of hinted at it at the end of the post, but one way of helping this would be to actually make it volumetric (i.e. through raymarching/offsetting the shadow sampling via noise). I'll put together an example tomorrow, but the basic idea is to accumulate shadow samples by taking small steps along the view ray. It will be more expensive (we especially need to be careful with overdraw, considering these are particles), so I'll also try just a raw per-pixel sampling offset (potentially noisy, but it should fix directional banding and only needs one sample).
    RedRiverStudio likes this.
  10. bgolus


    Dec 7, 2012
    A common solution for this in AAA games is to reduce the resolution you’re sampling the shadow map at. Early on this came in the form of doing per-vertex shadow samples and dynamically tessellating the particles, but that’s not really done anymore. Later, shadows for particle systems were moved into collecting the shadows in 3D textures, which also allowed for easier soft self-shadowing. More recently I’ve started to see shadows collected in a low-resolution “lightmap” texture, like Doom, or into generalized 3D volume lighting representations, like Call of Duty and Destiny 2. Another common thing is to render particles at a much lower resolution so you can do higher quality (i.e. more expensive) rendering and then composite back into the full-resolution scene rendering. That’s used in a lot of games.
    Namey5 and RedRiverStudio like this.
  11. Namey5


    Jul 5, 2013
    I've been experimenting with some things, but billboarded particles are unique to work with as the illusion breaks a lot of conventions that other techniques rely upon. The main difficulty comes from figuring out where to sample to properly represent the 3D volume that a particle is visually approximating. My idea for raymarching was to essentially treat each particle like a box - taking randomly distributed samples throughout the volume of the box but still relevant to the particle's surface. It does work, but I don't really have a proper scenario to test it to the extent of your needs. It would also be convenient to use this sampling method with the rest of the lighting path to avoid directional banding on things like spotlights, but surface shaders don't quite offer that level of control, so you would need to write a full vert/frag shader to take advantage of that.

    With that aside, here are some things you can try. I'll start with the cheaper one - using noise to break up shadow banding by offsetting the sample coordinates per-pixel. For this there are a variety of noise types and patterns you could use, but I use a 3-channel blue noise texture as it is easiest on the eyes and great for temporal filtering;

    When importing this, make sure to disable compression, set the max-size to 64 and mark it as linear (there should be an sRGB checkbox near the alpha settings that you can disable). When declaring this in the shader properties, you should default it to grey because we will be remapping it into a range of [-1,1];

    Code (CSharp):
    _NoiseTex ("Noise", 2D) = "gray" {}
    In order to sample this at the correct resolution, we will also need the texture's dimensions (which can be accessed via the _TexelSize suffix);

    Code (CSharp):
    sampler2D _NoiseTex;
    float4 _NoiseTex_TexelSize;
    I also like to setup a noise sampling function to make it a bit easier to reuse;

    Code (CSharp):
    float3 rand3 (float2 uv)
    {
        //This makes noise texel size = screen texel size
        //Equates to screen resolution / noise resolution
        float2 res = _ScreenParams.xy * _NoiseTex_TexelSize.xy;
        //Gradient functions can cause problems and we only need the first mip anyway, so force that
        return tex2Dlod (_NoiseTex, float4 (uv * res, 0, 0)).xyz;
    }
    We will also be needing scene depth and screen-position in more than one place now, so we can move some stuff out of the shader variant;

    Code (CSharp):
    struct Input
    {
        float2 uv_MainTex;
        fixed4 color : COLOR;
        float3 worldPos;
        float4 screenPos;
    };

    ...

    void surf (Input IN, inout SurfaceOutput o)
    {
        IN.screenPos.xyz /= IN.screenPos.w;
        float depth = LinearEyeDepth (tex2D (_CameraDepthTexture, IN.screenPos.xy).r);
        float dist = LinearEyeDepth (IN.screenPos.z);

        #ifdef SOFTPARTICLES_ON
            float fade = saturate (_InvFade * (depth - dist));
            IN.color.a *= fade;
        #endif
    The first bit of offsetting we will do is to push the sampling position towards or away from the camera by a random amount, within the bounds of how thick we want the particles to be;

    Code (CSharp):
    //Sample the noise texture and remap to a range of [-1,1]
    float3 noise = rand3 (IN.screenPos.xy) * 2.0 - 1.0;

    //We can calculate the view vector by subtracting the camera's position from our world space position
    float3 viewDir = IN.worldPos - _WorldSpaceCameraPos;
    //This normalises the view vector based on linear view depth (actually normalising it will throw things off)
    float3 viewDirN = viewDir / dist;

    //Offset the world position by a random amount between [-1,1] in the view direction
    float3 pos = IN.worldPos + viewDirN * noise.z * _Thickness;

    //Make sure to use the new position for sampling
    float3 shadowCoord0 = mul (unity_WorldToShadow[0], float4 (pos, 1)).xyz;
    float3 shadowCoord1 = mul (unity_WorldToShadow[1], float4 (pos, 1)).xyz;
    float3 shadowCoord2 = mul (unity_WorldToShadow[2], float4 (pos, 1)).xyz;
    float3 shadowCoord3 = mul (unity_WorldToShadow[3], float4 (pos, 1)).xyz;

    //Here too
    float cascadeDistance = distance (pos, _WorldSpaceCameraPos);
    Next, we will also apply a bit of jittering to the shadow space xy coordinates to soften shadow edges;

    Code (CSharp):
    float4 shadowCoord = float4 (shadowCoord0 * weights[0] + shadowCoord1 * weights[1] + shadowCoord2 * weights[2] + shadowCoord3 * weights[3], 1);

    //We want this to only be a few pixels, so 0.002 scales assuming 512x512 baseline shadow resolution
    shadowCoord.xy += noise.xy * 0.002 * _ShadowSoftness;

    //Sample the shadowmap and store the results in an unused output (in this case, specular)
    o.Specular = UNITY_SAMPLE_SHADOW (_SunShadowmap, shadowCoord);
    The material properties used in the above are just scalars to control particle thickness (in metres) and shadow softness, i.e.

    Code (CSharp):
    _ShadowSoftness ("Shadow Softness", Float) = 1.0
    _Thickness ("Particle Thickness", Float) = 1.0
    This does a decent job at breaking up banding and approximating a volume whilst still only using a single shadow sample, however it can get somewhat noisy (noise increases with particle thickness). I personally don't mind this, but it can be distracting. Ordinarily I use TAA, and in combination with using new noise samples every frame you actually end up with a filtered image (as the samples accumulate over time), but that isn't always a viable solution, hence where part two comes in - raymarching.

    Funnily enough, what we just did is essentially exactly the same as the raymarching implementation, only we used a single sample rather than multiple. When we use multiple samples along the ray, we get a closer approximation of the volume - which filters out the noise. To start off with, let's define a sample count for how many steps we want to take along the ray. You could use a material value to adjust samples on the fly, but I use a macro to define this as a constant for performance reasons;

    Code (CSharp):
    #define SAMPLES 4u
    void surf (Input IN, inout SurfaceOutput o)
    {
    From here on out it's basically the same as the previous example, except this time we cast a ray through the volume and take multiple samples along it;

    Code (CSharp):
    ...

    //In this case we will be handling dithering a little differently,
    //so we only want to remap the xy channels
    float3 noise = rand3 (IN.screenPos.xy);
    noise.xy = noise.xy * 2.0 - 1.0;

    float3 viewDir = IN.worldPos - _WorldSpaceCameraPos;
    float3 viewDirN = viewDir / dist;

    //This makes sure the ray doesn't go through objects, but still uses all samples
    float rayLength = min (depth - dist, _Thickness);

    //Ray start point is at a constant thickness towards the camera
    float3 startPos = IN.worldPos + viewDirN * -_Thickness;
    //Ray end point is at the particle's max thickness or closest scene intersection
    float3 endPos = IN.worldPos + viewDirN * rayLength;

    //We don't want to have all these matrix multiplications in the marching loop, so
    //compute the start and end points here and interpolate between them in the loop
    float3 shadowCoordStart0 = mul (unity_WorldToShadow[0], float4 (startPos, 1)).xyz;
    float3 shadowCoordStart1 = mul (unity_WorldToShadow[1], float4 (startPos, 1)).xyz;
    float3 shadowCoordStart2 = mul (unity_WorldToShadow[2], float4 (startPos, 1)).xyz;
    float3 shadowCoordStart3 = mul (unity_WorldToShadow[3], float4 (startPos, 1)).xyz;
    float3 shadowCoordEnd0 = mul (unity_WorldToShadow[0], float4 (endPos, 1)).xyz;
    float3 shadowCoordEnd1 = mul (unity_WorldToShadow[1], float4 (endPos, 1)).xyz;
    float3 shadowCoordEnd2 = mul (unity_WorldToShadow[2], float4 (endPos, 1)).xyz;
    float3 shadowCoordEnd3 = mul (unity_WorldToShadow[3], float4 (endPos, 1)).xyz;

    //Nice to have the inverse sample count to multiply by
    const float invSamples = 1.0 / (float)SAMPLES;

    //Accumulate shadow samples into this
    float shadowAtten = 0;

    //I like to define all variables outside the loop to force myself to only keep necessary calculations inside
    float3 pos = startPos;
    float cascadeDistance, t;
    float4 zNear, zFar, weights, shadowCoord;
    float3 shadowCoord0, shadowCoord1, shadowCoord2, shadowCoord3;
    //Loop through the ray
    for (uint x = 0; x < SAMPLES; x++)
    {
        //This is how far we have travelled along the ray (from [0,1]). We also offset by noise in here rather than before
        t = (x + noise.z) * invSamples;

        //Interpolate between start and end points for all positions
        pos = lerp (startPos, endPos, t);
        shadowCoord0 = lerp (shadowCoordStart0, shadowCoordEnd0, t);
        shadowCoord1 = lerp (shadowCoordStart1, shadowCoordEnd1, t);
        shadowCoord2 = lerp (shadowCoordStart2, shadowCoordEnd2, t);
        shadowCoord3 = lerp (shadowCoordStart3, shadowCoordEnd3, t);

        cascadeDistance = distance (pos, _WorldSpaceCameraPos);
        zNear = float4 (cascadeDistance >= _LightSplitsNear);
        zFar = float4 (cascadeDistance < _LightSplitsFar);
        weights = zNear * zFar;

        shadowCoord = float4 (shadowCoord0 * weights[0] + shadowCoord1 * weights[1] + shadowCoord2 * weights[2] + shadowCoord3 * weights[3], 1);

        shadowCoord.xy += noise.xy * 0.002 * _ShadowSoftness;

        //Accumulate shadow samples
        shadowAtten += UNITY_SAMPLE_SHADOW (_SunShadowmap, shadowCoord);
    }

    //Here we normalise the samples back to the correct range.
    //The last part scales the intensity back if the ray was cut short so that
    //each sample contributes the same amount over the same distance
    o.Specular = shadowAtten * invSamples * (rayLength / _Thickness * 0.5 + 0.5);
    Congratulations, you've now written a basic volumetric renderer. I don't really have any way of testing these to your needs, so let me know how it goes or if you have any problems.

    @bgolus also had some good suggestions on different methods for handling volumetric/particle lighting and rendering. My other train of thought for this lighting system was similar to the vertex-lit approach - lighting/shadowing per particle, although with the size of your particles that could still be quite jarring. In terms of lighting using a 3D texture to store the volume, that's actually how I implemented my volumetric fog - however I can say from experience it is a real pain to get working in the built-in renderer as there is no real way to get access to the directional shadow matrices. They are automatically available in our case (which is lucky), but for the 3D texture approach you need to pass everything by hand to the compute shader. The only way I could figure out was to blit the matrix values to a 4x4 texture and read them back on the CPU, then pass those to the shader (I looked around and even Aura does the same thing). Either way, I think it's a little outside the scope for me to explain in a forum post. Rendering particles to a lower-res render target and compositing is also a viable option, but it would require handling particle rendering manually (probably through Graphics.DrawMeshInstanced).
    Last edited: Apr 20, 2020
  12. RedRiverStudio


    Sep 8, 2014
    Your volumetric fog looks fantastic. Happy to give this a shot. As we start getting into a more sophisticated realm, I have to be mindful of performance, as mine is a VR game first and foremost. I will try some performance tests along with the new shader to see how it handles.

    Somewhere between jittering and raymarching, I had the idea of swapping the particle billboards with 3-plane facing meshes. This would obviously triple the poly count, but provide a bit more filler material to collect shadows and represent more accurate depth. I can also add a horizontal plane to provide a better cast-shadow profile, as the particles are always facing the camera. It's a quick, cheap fix that might work well enough; I will try it on top of what you have provided.

    There's nothing special about my 3d model and particle setup, but I am happy to share it with you if you want.
  13. bgolus


    Dec 7, 2012
    Yeah ... don’t do that. It looks really bad in VR. You can actually see the three planes, and because real time rendering doesn’t sort per pixel transparency there’ll be a depth disparity depending on what order the planes happen to render in. Some VR games have used three planes, but specifically only on additive particles, like explosion fire, etc. It’s also really bad for fillrate which all VR, even high end desktop PC based VR, struggles with. The poly / vertex count increase is actually irrelevantly small, even for mobile.

    What I do for the PC version of Falcon Age is adjust the shadow sample positions by a height map so that they’re something closer to a spherical shape instead of a plane. It’s not perfect, but it gives a better sense of volume compared to using the particle billboard’s position alone. It’s also a very stylized game style.

    @Namey5 ’s example of using noise is a good one though. Especially for PC. With the 90Hz-or-higher frame rates of a lot of headsets, the noise will get “blurred” purely by the frame rate. You might be able to get away with using two or three randomly offset 3D depth positions for the shadow sampling (using an RGB blue noise texture that is changed every frame); along with the temporal blurring from the frame rate, that will get you something perceptibly fairly soft-looking for relatively cheap.
    Last edited: Apr 20, 2020
    Namey5 and RedRiverStudio like this.