
HDRP will die due to miners?

Discussion in 'High Definition Render Pipeline' started by BattleAngelAlita, Feb 19, 2021.

Poll: HDRP will die due to miners?

  Yes — 7 vote(s), 12.3%
  No — 50 vote(s), 87.7%
  1. BattleAngelAlita

    BattleAngelAlita

    Joined:
    Nov 20, 2016
    Posts:
    400
It seems dedicated GPUs won't be affordable for the next 10 years. HDRP is known to be hugely performance-hungry, especially with the ray-tracing stuff.
     
  2. Kirsche

    Kirsche

    Joined:
    Apr 14, 2015
    Posts:
    121
No. Crypto will crash and burn and flood the market with used high-end GPUs as a result.
     
  3. rmon222

    rmon222

    Joined:
    Oct 24, 2018
    Posts:
    77
  4. BattleAngelAlita

    BattleAngelAlita

    Joined:
    Nov 20, 2016
    Posts:
    400
Still no affordable high-end GPUs; old low-end stock is going for 500+, and used low-end cards for 400+.
     
  5. kripto289

    kripto289

    Joined:
    Feb 21, 2013
    Posts:
    501
  6. BattleAngelAlita

    BattleAngelAlita

    Joined:
    Nov 20, 2016
    Posts:
    400
    Still no GPUs.
     
  7. M4dR0b

    M4dR0b

    Joined:
    Feb 1, 2019
    Posts:
    108
And now SSDs are on the rise too; next will be RAM... As a dev community we should do something with optical drives and floppy readers to divert the market onto those products and free up the GPUs!
#DevUnite lol
     
  8. BattleAngelAlita

    BattleAngelAlita

    Joined:
    Nov 20, 2016
    Posts:
    400
Still no GPUs. Why hasn't HDRP died yet?
     
  9. HIBIKI_entertainment

    HIBIKI_entertainment

    Joined:
    Dec 4, 2018
    Posts:
    594
In general, not really.
If your target production is pushing HDRP's limits, you likely already have the hardware in some state or another.

If you're a self-employed solo dev, your production target would be tightly tied to your time and resources, which of course should be guiding your scope and end target.

Of course, production systems tend to have to be stronger than their playable counterparts in terms of hardware.

And yes, this does open gaps.

But there is still hardware to access, and suppliers should be key partners for your business to secure.

It's highly subject to your project's scope, though.

You can still build a lower-scope solo dev game in HDRP absolutely fine if you wanted to, but then you may also have better options in URP because of that.

And in terms of non-game projects, HDRP is a pipeline that supports a lot of other creative-industry pipelines. It's by no means going to vanish just because RTX cards are slim pickings and chip manufacturers are tight on silicon.
Those kinds of effects wouldn't only affect HDRP; that would be very much a global catastrophe.
     
  10. kripto289

    kripto289

    Joined:
    Feb 21, 2013
    Posts:
    501
They should keep HDRP only for interior demos. HDRP for realtime games should be removed; no one wants 10 fps in games.

[screenshots]
     
    Ruchir likes this.
  11. Leniaal

    Leniaal

    Joined:
    Nov 7, 2012
    Posts:
    119
    What an intelligent discussion.
     
    Gokcan likes this.
  12. kripto289

    kripto289

    Joined:
    Feb 21, 2013
    Posts:
    501
An intelligent discussion? Really?
On my PC with a GTX 1070 + Ryzen 3600 + 32 GB RAM I see 30 fps with a simple sphere at Full HD.
[screenshot]

With 10x10 pixels I get 40 fps (lol).
[screenshot]

And 1 fps at 8K.
[screenshot]

Nice performance. HDRP is not for average gaming video cards, only for high-end GPUs like a 3080/3090, which typical users can't buy.
Also, some people want HDRP to work in VR or on iPads. People don't understand what HDRP is for and use it in 90% of cases where it is absolutely not needed.
     
    Last edited: Oct 25, 2021
  13. ccsander

    ccsander

    Joined:
    May 17, 2013
    Posts:
    44
HDRP is highly scalable. It does have a higher base cost than URP, so use URP if you don't need fancy lighting and want to maximize performance. I wouldn't use the example scene as a benchmark. You can easily get FPS in the 100s using HDRP and have it look significantly better than URP. There are tons of options that can be turned off or scaled back to whatever the target game needs (see the sketch below). Honestly, if it weren't for HDRP, I would be looking at Unreal right now. Unity should not abandon HDRP. This is the future.
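To illustrate the "scaled back" point: here is a minimal sketch of turning one costly feature off at runtime, assuming the scene has a global Volume whose profile includes HDRP's Fog override (the script name and public field are hypothetical, not part of any official API):

Code (CSharp):
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

// Hypothetical sketch: disable costly volumetric fog on a Volume profile,
// keeping the cheaper exponential fog. Assumes the profile has a Fog override.
public class ScaleBackFog : MonoBehaviour
{
    public VolumeProfile profile; // assign the global Volume's profile in the Inspector

    void Start()
    {
        // TryGet returns false if the profile contains no Fog override.
        if (profile != null && profile.TryGet(out Fog fog))
        {
            fog.enableVolumetricFog.overrideState = true;
            fog.enableVolumetricFog.value = false;
        }
    }
}
The same TryGet pattern applies to other Volume overrides; frame-level features are scaled via the HDRP Asset's quality settings instead.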

Most GPU mining is driven by Ethereum miners, and that will end sometime next year. I mine ETH on the side when my GPUs aren't otherwise doing anything, and every single GPU I have has paid for itself this way. There are no other top cryptocurrencies that demand GPUs, so miners' GPU buying should drop significantly next year.

So the combination of these two things leads to my vote, which is a vehement NO!
     
    Auraion likes this.
  14. Deleted User

    Deleted User

    Guest

I get 40-50+ fps on a GTX 1650 Ti and Ryzen 3500H with 8 GB RAM, of which 2 GB is used by the Vega 8... I also tested it on an Intel HD 400 Celeron dual-core, which got around 10-15 fps... Why are you getting such low fps? Maybe it's a bug.
     
  15. pierred_unity

    pierred_unity

    Unity Technologies

    Joined:
    May 25, 2018
    Posts:
    433
Hey, this looks very wrong. What are your Unity and HDRP versions? Your CPU frame time is ridiculously high at 1080p; I'm surprised it doesn't raise your suspicion. Are you using a recent alpha version of Unity, which indeed has some CPU performance issues in the Editor? If not, you might want to report this as a bug; we'll need to investigate it.

Even in your 10x10 image, you are CPU-limited for some reason: your CPU thread is abnormally high at 23 ms, so obviously you can't go any higher than 43 fps (1000/23 = 43). It doesn't matter if you have a 1070 Ti or an RTX 3090 if you're CPU-limited.

For an honest comparison, I grabbed the 2020 LTS version of Unity and tested with my RTX 2070 and Ryzen 3700X: I reach 160 fps in the template easily at 1080p (~6 ms CPU thread). It could certainly go higher, but HDRP has a certain CPU overhead.

[screenshot]

Again, in your 8K image, you're clearly CPU-limited btw (607 ms for the CPU thread?!), not GPU-limited. For instance, my CPU seems nearly 10x faster (a Ryzen 3700X is not radically more powerful than your 3600).
[screenshot]

And by switching to the low-quality mode (without volumetric fog), I'm getting a more reasonable result at 8K. I don't really get the point of your 8K example anyway; most game productions out there don't even bother with native 4K, let alone 8K, because costs don't scale well at high resolutions. Instead, they use DLSS or other forms of upscaling (FSR, temporal), which HDRP supports. The fact that I'm at 30+ fps at full native 8K on my very reasonable machine (it's quite old by today's standards) is actually decent.
[screenshot]

So it's clearly a bug with the Unity version you use, or an issue on your machine, or a mix of both. You should be north of 90 fps with a 1070 and a Ryzen 3600 in Full HD in "correct" conditions. Even a Mac Mini M1 can handle the HDRP scene template at nearly 60 fps.


On top of this, profiling in the Editor is rarely a good idea, as the Editor (its UI, for instance) has a performance cost. At least play in "maximized" mode. Or better, build a player and profile that properly; the framerate will be higher regardless. I'd be curious whether you can reproduce my results with a recent 2020 LTS version, so that there can be a fair comparison and you don't base your opinion on very odd numbers. :)
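As a side note, here is a minimal sketch of the kind of in-player readout that avoids Editor overhead (the script and smoothing factor are illustrative, not an official Unity utility):

Code (CSharp):
using UnityEngine;

// Illustrative sketch: smoothed frame-time/FPS overlay for player builds.
public class FpsCounter : MonoBehaviour
{
    float smoothedDelta = 1f / 60f; // seed to avoid a divide-by-zero on frame one

    void Update()
    {
        // Exponential moving average of the frame time; unscaled so time scaling doesn't skew it.
        smoothedDelta = Mathf.Lerp(smoothedDelta, Time.unscaledDeltaTime, 0.05f);
    }

    void OnGUI()
    {
        GUI.Label(new Rect(10, 10, 300, 20),
                  $"{smoothedDelta * 1000f:F1} ms ({1f / smoothedDelta:F0} fps)");
    }
}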
     
    Kirsche and Gokcan like this.
  16. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
I dare you to create a new scene, add a sphere there, and measure. The HDRP template is quite heavy for multiple reasons. Also, the sample's default quality settings enable a lot of features the average Unity user wouldn't normally use; it's not a good scene for benchmarking HDRP. It's better suited for showcasing HDRP functionality, IMHO.
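Something like this throwaway sketch is enough to get a baseline (the warm-up and window sizes are arbitrary choices of mine):

Code (CSharp):
using UnityEngine;

// Throwaway sketch: spawn a lone sphere in an empty scene and log the
// average frame time over a fixed window, skipping warm-up frames.
public class SphereBenchmark : MonoBehaviour
{
    const int WarmupFrames = 60;   // skip first frames (shader compilation, load spikes)
    const int SampleFrames = 300;  // measurement window

    int frame;
    float accumulated;

    void Start()
    {
        GameObject.CreatePrimitive(PrimitiveType.Sphere);
    }

    void Update()
    {
        frame++;
        if (frame <= WarmupFrames) return;

        accumulated += Time.unscaledDeltaTime;
        if (frame == WarmupFrames + SampleFrames)
        {
            float avg = accumulated / SampleFrames;
            Debug.Log($"Average over {SampleFrames} frames: {avg * 1000f:F2} ms ({1f / avg:F0} fps)");
        }
    }
}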
     
  17. Gokcan

    Gokcan

    Joined:
    Aug 15, 2013
    Posts:
    289
    @kripto289
Very nice answer from @pierred_unity. If you see such numbers on the CPU and you neither get suspicious nor report it as a bug, that's your problem, actually... What do you want from them? Should they track your numbers for you?
     
  18. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
Well, that HDRP template IS heavier to run than you'd expect from such visuals... but it's not a prime example of HDRP perf, nor do I think it was ever designed as such.
     
    nehvaleem likes this.
  19. kripto289

    kripto289

    Joined:
    Feb 21, 2013
    Posts:
    501
It's a GPU bottleneck; the CPU just waits for it.
[screenshot]

I tested it on HDRP 12, and most likely it's this bug.

>For an honest comparison, I grabbed the 2020 LTS version of Unity and tested with my RTX 2070 and Ryzen 3700X: I reach 160 fps in the template easily at 1080p (~6 ms CPU thread). It could certainly go higher, but HDRP has a certain CPU overhead.

On HDRP 10 I get ~85 fps with medium settings and ~65 fps with high settings.
[screenshots]


Anyway, ~60-90 fps in an empty scene with one sphere is not perfect.

As I said before, it's a "wait for GPU" callback.

It's just an example (like the 10x10 pixels one) showing that performance doesn't scale with resolution the way it does in standard rendering.
And even a simple scene has a big performance overhead from light culling, light tiling, and countless prepasses/postpasses (the distortion pass, etc.). I understand that HDRP rendering is heavy because it targets visual quality. But 90% of people don't know that, because Unity advertises HDRP misleadingly.

Where is the minimum specification? That's why some people want HDRP on mobiles/VR/iPads.
Nowhere is there a comparison of the rendering speed of the Built-in pipeline vs URP vs HDRP.
Most developers (like typical gamers) don't understand that +10% visual quality may cost -50% performance, so they prefer HDRP.
For example, here is the HDRP landing-page poster. Really?

Why does this game require HDRP rendering?
Or this one?


And these games will have minimum specifications like top AAA games, because HDRP rendering requires it.
That is not right.
     
  20. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
It's not, but you're also definitely not looking at an empty scene with one sphere, regardless of how you want to twist it.

Here's a screenshot of HDRP running on my decade-old i3 with an Nvidia GT640 (the slower DDR3 version):


Same thing on my RTX 2070S:
     
  21. Matjio

    Matjio

    Unity Technologies

    Joined:
    Dec 1, 2014
    Posts:
    108
Hi @kripto289,

We just published a blog post that dives deep into HDRP settings and their impact on performance, with comparisons against the Built-in render pipeline:
https://blog.unity.com/technology/get-acquainted-with-hdrp-settings-for-enhanced-performance
Hope that it is helpful.

Re the perf issue: could you share which version of Unity you were using? There was indeed a regression introduced, but it has been fixed. Otherwise, we have seen significant performance improvements in most projects in 2021.2 compared to 2020 LTS.

As for the "empty scene with a sphere": it is far from empty: volumetric fog with scattering, multiple decals with decal layers, physically based sky, procedural clouds, multiple post-processing effects, soft shadows, multiple spot lights with shadows and IES light profiles, multiple lens flares with real-time occlusion, plants with subsurface scattering, VFX Graph for butterflies and dust, light maps, light probes, multiple reflection probes, auto exposure, temporal anti-aliasing, and on higher quality even more features activated... Indeed, as mentioned before, it is made to showcase a wide range of features working together.

I can assure you that HDRP is highly optimized for the feature set available off the shelf, and it can now be used in productions targeting a wide range of PCs (incl. Apple M1), consoles (except the Switch), and PC-based VR. Which does not mean that there is no room for improvement; we will continue optimizing as much as we can! :)

For our landing page, thanks for your feedback. We plan to update it with new content and will make sure we are clearer about platform support. (I personally think, though, that Harold Halibut, from which the second snapshot you shared above is taken, looks fantastic and is a great showcase of HDRP.)

     
    Lex4art, M4dR0b, Leniaal and 3 others like this.
  22. M4dR0b

    M4dR0b

    Joined:
    Feb 1, 2019
    Posts:
    108
OT: this absolutely does look wonderful. Great artistic vision!
     
  23. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
Great talk about Harold Halibut:

If you watch it through, you'll notice that they use pretty much most of HDRP's functionality, including volumetrics and virtual texturing.
     
    M4dR0b likes this.
  24. koirat

    koirat

    Joined:
    Jul 7, 2012
    Posts:
    2,068
  25. kripto289

    kripto289

    Joined:
    Feb 21, 2013
    Posts:
    501
I know how HDRP rendering works. I meant that there are no gameplay features in the scene: no physics, objects, environment, characters, scripts, UI, etc.
I said this before.

HDRP uses complex shader calculations everywhere in favor of visual quality, at the expense of rendering speed.
Trying to optimize HDRP rendering is like trying to make a bulldozer keep up with a sports car.
For example, here is the standard fog calculation in the shader:

    Code (CSharp):
float unityFogFactor = unity_FogParams.x * (coord); unityFogFactor = exp2(-unityFogFactor * unityFogFactor);
col.rgb = lerp((fogCol).rgb, (col).rgb, saturate(unityFogFactor));
and here is the HDRP version of the fog calculation:

    Code (CSharp):
float3 ExpLerp(float3 A, float3 B, float t, float x, float y)
{
    // Remap t: (exp(10 k t) - 1) / (exp(10 k) - 1) = exp(x t) y - y.
    t = exp(x * t) * y - y;
    // Perform linear interpolation using the new value of t.
    return lerp(A, B, t);
}

float3 GetFogColor(float3 V, float fragDist)
{
    float3 color = _FogColor.rgb;

    if (_FogColorMode == FOGCOLORMODE_SKY_COLOR)
    {
        // Based on Uncharted 4 "Mip Sky Fog" trick: http://advances.realtimerendering.com/other/2016/naughty_dog/NaughtyDog_TechArt_Final.pdf
        float mipLevel = (1.0 - _MipFogMaxMip * saturate((fragDist - _MipFogNear) / (_MipFogFar - _MipFogNear))) * (ENVCONSTANTS_CONVOLUTION_MIP_COUNT - 1);
        // For the atmospheric scattering, we use the GGX convoluted version of the cubemap. That matches the one of index 0
        color *= SampleSkyTexture(-V, mipLevel, 0).rgb; // '_FogColor' is the tint
    }

    return color;
}

// All units in meters!
// Assumes that there is NO sky occlusion along the ray AT ALL.
// We evaluate atmospheric scattering for the sky and other celestial bodies
// during the sky pass. The opaque atmospheric scattering pass applies atmospheric
// scattering to all other opaque geometry.
void EvaluatePbrAtmosphere(float3 worldSpaceCameraPos, float3 V, float distAlongRay, bool renderSunDisk,
                           out float3 skyColor, out float3 skyOpacity)
{
    skyColor = skyOpacity = 0;

    const float  R = _PlanetaryRadius;
    const float2 n = float2(_AirDensityFalloff, _AerosolDensityFalloff);
    const float2 H = float2(_AirScaleHeight,    _AerosolScaleHeight);

    // TODO: Not sure it's possible to precompute cam rel pos since variables
    // in the two constant buffers may be set at a different frequency?
    const float3 O     = worldSpaceCameraPos - _PlanetCenterPosition.xyz;
    const float  tFrag = abs(distAlongRay); // Clear the "hit ground" flag

    float3 N; float r; // These params correspond to the entry point
    float  tEntry = IntersectAtmosphere(O, V, N, r).x;
    float  tExit  = IntersectAtmosphere(O, V, N, r).y;

    float NdotV  = dot(N, V);
    float cosChi = -NdotV;
    float cosHor = ComputeCosineOfHorizonAngle(r);

    bool rayIntersectsAtmosphere = (tEntry >= 0);
    bool lookAboveHorizon        = (cosChi >= cosHor);

    // Our precomputed tables only contain information above ground.
    // Being on or below ground still counts as outside.
    // If it's outside the atmosphere, we only need one texture look-up.
    bool hitGround = distAlongRay < 0;
    bool rayEndsInsideAtmosphere = (tFrag < tExit) && !hitGround;

    if (rayIntersectsAtmosphere)
    {
        float2 Z = R * n;
        float r0 = r, cosChi0 = cosChi;

        float r1 = 0, cosChi1 = 0;
        float3 N1 = 0;

        if (tFrag < tExit)
        {
            float3 P1 = O + tFrag * -V;

            r1      = length(P1);
            N1      = P1 * rcp(r1);
            cosChi1 = dot(P1, -V) * rcp(r1);

            // Potential swap.
            cosChi0 = (cosChi1 >= 0) ? cosChi0 : -cosChi0;
        }

        float2 ch0, ch1 = 0;

        {
            float2 z0 = r0 * n;

            ch0.x = RescaledChapmanFunction(z0.x, Z.x, cosChi0);
            ch0.y = RescaledChapmanFunction(z0.y, Z.y, cosChi0);
        }

        if (tFrag < tExit)
        {
            float2 z1 = r1 * n;

            ch1.x = ChapmanUpperApprox(z1.x, abs(cosChi1)) * exp(Z.x - z1.x);
            ch1.y = ChapmanUpperApprox(z1.y, abs(cosChi1)) * exp(Z.y - z1.y);
        }

        // We may have swapped X and Y.
        float2 ch = abs(ch0 - ch1);

        float3 optDepth = ch.x * H.x * _AirSeaLevelExtinction.xyz
                        + ch.y * H.y * _AerosolSeaLevelExtinction;

        skyOpacity = 1 - TransmittanceFromOpticalDepth(optDepth); // from 'tEntry' to 'tFrag'

        for (uint i = 0; i < _DirectionalLightCount; i++)
        {
            DirectionalLightData light = _DirectionalLightDatas[i];

            // Use scalar or integer cores (more efficient).
            bool interactsWithSky = asint(light.distanceFromCamera) >= 0;

            if (!interactsWithSky) continue;

            float3 L = -light.forward.xyz;

            // The sun disk hack causes some issues when applied to nearby geometry, so don't do that.
            if (renderSunDisk && asint(light.angularDiameter) != 0 && light.distanceFromCamera <= tFrag)
            {
                float c = dot(L, -V);

                if (-0.99999 < c && c < 0.99999)
                {
                    float alpha = 0.5 * light.angularDiameter;
                    float beta  = acos(c);
                    float gamma = min(alpha, beta);

                    // Make sure that if (beta = Pi), no rotation is performed.
                    gamma *= (PI - beta) * rcp(PI - gamma);

                    // Perform a shortest arc rotation.
                    float3   A = normalize(cross(L, -V));
                    float3x3 R = RotationFromAxisAngle(A, sin(gamma), cos(gamma));

                    // Rotate the light direction.
                    L = mul(R, L);
                }
            }

            // TODO: solve in spherical coords?
            float height = r - R;
            float  NdotL = dot(N, L);
            float3 projL = L - N * NdotL;
            float3 projV = V - N * NdotV;
            float  phiL  = acos(clamp(dot(projL, projV) * rsqrt(max(dot(projL, projL) * dot(projV, projV), FLT_EPS)), -1, 1));

            TexCoord4D tc = ConvertPositionAndOrientationToTexCoords(height, NdotV, NdotL, phiL);

            float3 radiance = 0; // from 'tEntry' to 'tExit'

            // Single scattering does not contain the phase function.
            float LdotV = dot(L, V);

            // Air.
            radiance += lerp(SAMPLE_TEXTURE3D_LOD(_AirSingleScatteringTexture,     s_linear_clamp_sampler, float3(tc.u, tc.v, tc.w0), 0).rgb,
                             SAMPLE_TEXTURE3D_LOD(_AirSingleScatteringTexture,     s_linear_clamp_sampler, float3(tc.u, tc.v, tc.w1), 0).rgb,
                             tc.a) * AirPhase(LdotV);

            // Aerosols.
            // TODO: since aerosols are in a separate texture,
            // they could use a different max height value for improved precision.
            radiance += lerp(SAMPLE_TEXTURE3D_LOD(_AerosolSingleScatteringTexture, s_linear_clamp_sampler, float3(tc.u, tc.v, tc.w0), 0).rgb,
                             SAMPLE_TEXTURE3D_LOD(_AerosolSingleScatteringTexture, s_linear_clamp_sampler, float3(tc.u, tc.v, tc.w1), 0).rgb,
                             tc.a) * AerosolPhase(LdotV);

            // MS.
            radiance += lerp(SAMPLE_TEXTURE3D_LOD(_MultipleScatteringTexture,      s_linear_clamp_sampler, float3(tc.u, tc.v, tc.w0), 0).rgb,
                             SAMPLE_TEXTURE3D_LOD(_MultipleScatteringTexture,      s_linear_clamp_sampler, float3(tc.u, tc.v, tc.w1), 0).rgb,
                             tc.a);

            if (rayEndsInsideAtmosphere)
            {
                float3 radiance1 = 0; // from 'tFrag' to 'tExit'

                // TODO: solve in spherical coords?
                float height1 = r1 - R;
                float  NdotV1 = -cosChi1;
                float  NdotL1 = dot(N1, L);
                float3 projL1 = L - N1 * NdotL1;
                float3 projV1 = V - N1 * NdotV1;
                float  phiL1  = acos(clamp(dot(projL1, projV1) * rsqrt(max(dot(projL1, projL1) * dot(projV1, projV1), FLT_EPS)), -1, 1));

                tc = ConvertPositionAndOrientationToTexCoords(height1, NdotV1, NdotL1, phiL1);

                // Single scattering does not contain the phase function.

                // Air.
                radiance1 += lerp(SAMPLE_TEXTURE3D_LOD(_AirSingleScatteringTexture,     s_linear_clamp_sampler, float3(tc.u, tc.v, tc.w0), 0).rgb,
                                  SAMPLE_TEXTURE3D_LOD(_AirSingleScatteringTexture,     s_linear_clamp_sampler, float3(tc.u, tc.v, tc.w1), 0).rgb,
                                  tc.a) * AirPhase(LdotV);

                // Aerosols.
                // TODO: since aerosols are in a separate texture,
                // they could use a different max height value for improved precision.
                radiance1 += lerp(SAMPLE_TEXTURE3D_LOD(_AerosolSingleScatteringTexture, s_linear_clamp_sampler, float3(tc.u, tc.v, tc.w0), 0).rgb,
                                  SAMPLE_TEXTURE3D_LOD(_AerosolSingleScatteringTexture, s_linear_clamp_sampler, float3(tc.u, tc.v, tc.w1), 0).rgb,
                                  tc.a) * AerosolPhase(LdotV);

                // MS.
                radiance1 += lerp(SAMPLE_TEXTURE3D_LOD(_MultipleScatteringTexture,      s_linear_clamp_sampler, float3(tc.u, tc.v, tc.w0), 0).rgb,
                                  SAMPLE_TEXTURE3D_LOD(_MultipleScatteringTexture,      s_linear_clamp_sampler, float3(tc.u, tc.v, tc.w1), 0).rgb,
                                  tc.a);

                // L(tEntry, tFrag) = L(tEntry, tExit) - T(tEntry, tFrag) * L(tFrag, tExit)
                radiance = max(0, radiance - (1 - skyOpacity) * radiance1);
            }

            radiance *= light.color.rgb; // Globally scale the intensity

            skyColor += radiance;
        }

        skyColor   = Desaturate(skyColor,   _ColorSaturation);
        skyOpacity = Desaturate(skyOpacity, _AlphaSaturation) * _AlphaMultiplier;

        float horAngle = acos(cosHor);
        float chiAngle = acos(cosChi);

        // [start, end] -> [0, 1] : (x - start) / (end - start) = x * rcpLength - (start * rcpLength)
        // TEMPLATE_3_REAL(Remap01, x, rcpLength, startTimesRcpLength, return saturate(x * rcpLength - startTimesRcpLength))
        float start    = horAngle;
        float end      = 0;
        float rcpLen   = rcp(end - start);
        float nrmAngle = Remap01(chiAngle, rcpLen, start * rcpLen);
        // float angle = saturate((0.5 * PI) - acos(cosChi) * rcp(0.5 * PI));

        skyColor *= ExpLerp(_HorizonTint.rgb, _ZenithTint.rgb, nrmAngle, _HorizonZenithShiftPower, _HorizonZenithShiftScale);
    }
}

float3 GetViewForwardDir1(float4x4 viewMatrix)
{
    return -viewMatrix[2].xyz;
}

void EvaluateAtmosphericScattering(PositionInputs posInput, float3 V, out float3 color, out float3 opacity)
{
    color = opacity = 0;

#ifdef DEBUG_DISPLAY
    // Don't sample atmospheric scattering when lighting debug mode is enabled so fog is not visible
    if (_DebugLightingMode >= DEBUGLIGHTINGMODE_DIFFUSE_LIGHTING && _DebugLightingMode <= DEBUGLIGHTINGMODE_EMISSIVE_LIGHTING)
        return;

    if (_DebugShadowMapMode == SHADOWMAPDEBUGMODE_SINGLE_SHADOW || _DebugLightingMode == DEBUGLIGHTINGMODE_LUX_METER || _DebugLightingMode == DEBUGLIGHTINGMODE_LUMINANCE_METER)
        return;
#endif

    // TODO: do not recompute this, but rather pass it directly.
    // Note1: remember the hacked value of 'posInput.positionWS'.
    // Note2: we do not adjust it anymore to account for the distance to the planet. This can lead to wrong results (since the planet does not write depth).
    float fogFragDist = distance(posInput.positionWS, GetCurrentViewPosition());

    if (_FogEnabled)
    {
        float4 volFog = float4(0.0, 0.0, 0.0, 0.0);

        float expFogStart = 0.0f;

        if (_EnableVolumetricFog != 0)
        {
            bool doBiquadraticReconstruction = _VolumetricFilteringEnabled == 0; // Only if filtering is disabled.
            float4 value = SampleVBuffer(TEXTURE3D_ARGS(_VBufferLighting, s_linear_clamp_sampler),
                                         posInput.positionNDC,
                                         fogFragDist,
                                         _VBufferViewportSize,
                                         _VBufferLightingViewportScale.xyz,
                                         _VBufferLightingViewportLimit.xyz,
                                         _VBufferDistanceEncodingParams,
                                         _VBufferDistanceDecodingParams,
                                         true, doBiquadraticReconstruction, false);

            // TODO: add some slowly animated noise (dither?) to the reconstructed value.
            // TODO: re-enable tone mapping after implementing pre-exposure.
            volFog = DelinearizeRGBA(float4(/*FastTonemapInvert*/(value.rgb), value.a));
            expFogStart = _VBufferLastSliceDist;
        }

        // TODO: if 'posInput.linearDepth' is computed using 'posInput.positionWS',
        // and the latter resides on the far plane, the computation will be numerically unstable.
        float distDelta = fogFragDist - expFogStart;

        if ((distDelta > 0))
        {
            // Apply the distant (fallback) fog.
            float3 positionWS = GetCurrentViewPosition() - V * expFogStart;
            float  startHeight = positionWS.y;
            float  cosZenith = -V.y;

            // For both homogeneous and exponential media,
            // Integrate[Transmittance[x] * Scattering[x], {x, 0, t}] = Albedo * Opacity[t].
            // Note that pulling the incoming radiance (which is affected by the fog) out of the
            // integral is wrong, as it means that shadow rays are not volumetrically shadowed.
            // This will result in fog looking overly bright.

            float3 volAlbedo = _HeightFogBaseScattering.xyz / _HeightFogBaseExtinction;
            float  odFallback = OpticalDepthHeightFog(_HeightFogBaseExtinction, _HeightFogBaseHeight,
                _HeightFogExponents, cosZenith, startHeight, distDelta);
            float  trFallback = TransmittanceFromOpticalDepth(odFallback);
            float  trCamera = 1 - volFog.a;

            volFog.rgb += trCamera * GetFogColor(V, fogFragDist) * GetCurrentExposureMultiplier() * volAlbedo * (1 - trFallback);
            volFog.a = 1 - (trCamera * trFallback);
        }

        color = volFog.rgb; // Already pre-exposed
        opacity = volFog.a;
    }

    // Sky pass already applies atmospheric scattering to the far plane.
    // This pass only handles geometry.
    if (_PBRFogEnabled && (posInput.deviceDepth != UNITY_RAW_FAR_CLIP_VALUE))
    {
        float3 skyColor = 0, skyOpacity = 0;

        // Convert it to distance along the ray. Doesn't work with tilt shift, etc.
        float tFrag = posInput.linearDepth * rcp(dot(-V, GetViewForwardDir1(UNITY_MATRIX_V)));

        EvaluatePbrAtmosphere(_WorldSpaceCameraPos.xyz, V, tFrag, false, skyColor, skyOpacity);
        skyColor *= _IntensityMultiplier * GetCurrentExposureMultiplier();

        // Rendering of fog and atmospheric scattering cannot really be decoupled.
#if 0
        // The best workaround is to deep composite them.
        float3 fogOD = OpticalDepthFromOpacity(fogOpacity);

        float3 fogRatio;
        fogRatio.r = (fogOpacity.r >= FLT_EPS) ? (fogOD.r * rcp(fogOpacity.r)) : 1;
        fogRatio.g = (fogOpacity.g >= FLT_EPS) ? (fogOD.g * rcp(fogOpacity.g)) : 1;
        fogRatio.b = (fogOpacity.b >= FLT_EPS) ? (fogOD.b * rcp(fogOpacity.b)) : 1;
        float3 skyRatio;
        skyRatio.r = (skyOpacity.r >= FLT_EPS) ? (skyOD.r * rcp(skyOpacity.r)) : 1;
        skyRatio.g = (skyOpacity.g >= FLT_EPS) ? (skyOD.g * rcp(skyOpacity.g)) : 1;
        skyRatio.b = (skyOpacity.b >= FLT_EPS) ? (skyOD.b * rcp(skyOpacity.b)) : 1;

        float3 logFogColor = fogRatio * fogColor;
        float3 logSkyColor = skyRatio * skyColor;

        float3 logCompositeColor = logFogColor + logSkyColor;
        float3 compositeOD = fogOD + skyOD;

        opacity = OpacityFromOpticalDepth(compositeOD);

        float3 rcpCompositeRatio;
        rcpCompositeRatio.r = (opacity.r >= FLT_EPS) ? (opacity.r * rcp(compositeOD.r)) : 1;
        rcpCompositeRatio.g = (opacity.g >= FLT_EPS) ? (opacity.g * rcp(compositeOD.g)) : 1;
        rcpCompositeRatio.b = (opacity.b >= FLT_EPS) ? (opacity.b * rcp(compositeOD.b)) : 1;

        color = rcpCompositeRatio * logCompositeColor;
#else
        // Deep compositing assumes that the fog spans the same range as the atmosphere.
        // Our fog is short range, so deep compositing gives surprising results.
        // Using the "shallow" over operator is more appropriate in our context.
        // We could do something more clever with deep compositing, but this would
        // probably be a waste in terms of perf.
        CompositeOver(color, opacity, skyColor, skyOpacity, color, opacity);
#endif
    }
}

#endif
And this code is only for the fog(!) computation. What about the shader instructions for lighting, shadows, post-effects, etc.?
     
  26. koirat

    koirat

    Joined:
    Jul 7, 2012
    Posts:
    2,068
Can I ask where you got this code from?
I'm especially interested in "OpticalDepthHeightFog", which is not present in your quote.
     
  27. kripto289

    kripto289

    Joined:
    Feb 21, 2013
    Posts:
    501
You can add the Unity HDRP folder to a C# solution and use F12 / Ctrl+F for searching. It's very helpful for reverse engineering.
[screenshot]

[screenshot]
     
    Matjio and koirat like this.