Feature Request: Features I implemented in my URP fork that I wish were in the engine

Discussion in '2D Experimental Preview' started by AlexVillalba, Jun 14, 2021.

  1. AlexVillalba
     Joined: Feb 7, 2017 · Posts: 346
    (I've re-posted this thread since I cannot move it; I think this is a better place for it.)

    I forked the GitHub repository of the URP and spent around one week, in total, implementing several features in the 2D Renderer that Jailbroken needed to look visually acceptable. As a first step, I stopped using Shader Graph and converted all shaders to HLSL/Cg.

    Multiple render targets

    To achieve some post-processing FXs, I needed 2 additional render targets that all the sprites in the scene write to. I modified the ScriptableRenderer, Renderer2D and Renderer2DData classes.

    Now I can specify additional render targets using the inspector:

    [Screenshot: extra render targets exposed in the Renderer2D inspector]
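    The gist of the change can be sketched as a custom render pass. This is not my actual fork code; ExtraTargetsPass, _EmissiveRT and _DisplacementRT are made-up names, and it assumes URP's ScriptableRenderPass MRT overload of ConfigureTarget():

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.Rendering;
        using UnityEngine.Rendering.Universal;

        // Sketch: bind two extra color targets next to the camera color target,
        // so sprite shaders can write to all three at once (MRT).
        class ExtraTargetsPass : ScriptableRenderPass
        {
            static readonly int s_EmissiveId = Shader.PropertyToID("_EmissiveRT");
            static readonly int s_DisplacementId = Shader.PropertyToID("_DisplacementRT");
            RenderTargetIdentifier m_CameraColor;

            public void Setup(RenderTargetIdentifier cameraColor) => m_CameraColor = cameraColor;

            public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
            {
                var desc = renderingData.cameraData.cameraTargetDescriptor;
                desc.depthBufferBits = 0; // the extra targets reuse the camera depth
                cmd.GetTemporaryRT(s_EmissiveId, desc, FilterMode.Bilinear);
                cmd.GetTemporaryRT(s_DisplacementId, desc, FilterMode.Bilinear);

                // Target 0: regular color; targets 1 and 2: the additional buffers.
                ConfigureTarget(new RenderTargetIdentifier[]
                {
                    m_CameraColor, s_EmissiveId, s_DisplacementId
                }, m_CameraColor);
            }

            public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
            {
                // Draw the sprites here with shaders that output to SV_Target0..2.
            }

            public override void OnCameraCleanup(CommandBuffer cmd)
            {
                cmd.ReleaseTemporaryRT(s_EmissiveId);
                cmd.ReleaseTemporaryRT(s_DisplacementId);
            }
        }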

    Emissive colors

    This is implemented in the shaders. If the Unlit checkbox is checked, I ignore the light textures and use the colors of the sprite's main texture directly; if it's unchecked, I mix the main color, the light color and the colors coming from a secondary texture (chosen in the Sprite Editor). The emissive color also depends on a power multiplier and a color that affects all emissive pixels.

    [Screenshot: emissive color settings]

    Additionally, emissive pixels are written to the Bloom render target.
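    For reference, the mixing described above boils down to something like the following. It is shown in C# for readability (the real version lives in the HLSL fragment shaders), and the exact formula is my reading of the description, not the actual shader code:

    Code (CSharp):
        using UnityEngine;

        static class EmissiveMix
        {
            // unlit: use the main texture color as-is, ignoring the light textures.
            // lit: main color modulated by light, plus the emissive contribution.
            public static Color Shade(Color main, Color light, Color emissiveTex,
                                      float emissivePower, Color emissiveTint, bool unlit)
            {
                if (unlit)
                    return main;

                Color lit = main * light;                            // regular 2D lighting
                Color emissive = emissiveTex * emissiveTint * emissivePower;
                return lit + emissive;                               // emissive adds on top
            }
        }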

    Bloom FX limited to emissive colors

    One thing I didn't like about the official implementation of the Bloom FX in Unity is that it affects everything. This may sometimes be desirable, though. I wanted only shiny things to be affected, so I added another render target to write to and, in the Renderer2D class, passed that texture to the existing post-processing FX (the PostProcessPass class).

    [Screenshot: Bloom limited to emissive colors]
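    Wiring-wise, the idea is just to expose that extra target to the bloom code. A hypothetical fragment (the _BloomSourceRT name is invented; the real change lives in Renderer2D and PostProcessPass):

    Code (CSharp):
        using UnityEngine.Rendering;

        static class BloomSourceSetup
        {
            // Called after the sprites (and their emissive pixels) have been drawn,
            // so the bloom prefilter can sample this texture instead of the full
            // camera color target.
            public static void Bind(CommandBuffer cmd, RenderTargetIdentifier emissiveTarget)
            {
                cmd.SetGlobalTexture("_BloomSourceRT", emissiveTarget);
            }
        }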

    Configurable alpha blending

    Sometimes you need to change how colors are blended; a common case is when creating energy or fire VFX. I added this possibility to my shaders and used a custom shader inspector to display the options in a friendly way. This is a built-in feature of normal URP Shader Graph shaders, but it does not exist for the 2D Renderer.

    [Screenshot: blending options in the custom shader inspector]
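    On the shader side, the usual URP trick applies: declare Blend [_SrcBlend] [_DstBlend] in the pass and drive those properties from the inspector or from code. A minimal sketch, assuming the shader declares the two float properties:

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.Rendering;

        static class BlendSetup
        {
            // Classic alpha blending: src * srcAlpha + dst * (1 - srcAlpha).
            public static void SetAlphaBlend(Material mat)
            {
                mat.SetInt("_SrcBlend", (int)BlendMode.SrcAlpha);
                mat.SetInt("_DstBlend", (int)BlendMode.OneMinusSrcAlpha);
            }

            // Additive blending, the common choice for energy/fire VFX.
            public static void SetAdditive(Material mat)
            {
                mat.SetInt("_SrcBlend", (int)BlendMode.SrcAlpha);
                mat.SetInt("_DstBlend", (int)BlendMode.One);
            }
        }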

    Custom post-processing FXs

    I created new post-processing FXs (which is not possible with the 2D Renderer at the moment) by modifying the Renderer2D class and copying and trimming down the PostProcessPass class.
    For example, I wrote displacement maps (to simulate shockwave FXs) while the sprites were rendered, and then used that texture in a post-processing FX to calculate the final colors.



    Here you can see 2 of them combined (vignette with texture and distortion):

    [Screenshot: vignette with texture and distortion combined]
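    The skeleton of such a custom pass is small. Here is a sketch (not the fork's actual code; the distortion material is assumed to offset its UVs by the displacement texture):

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.Rendering;
        using UnityEngine.Rendering.Universal;

        // Sketch: a custom post-process pass that runs the camera color through a
        // distortion material which samples the displacement render target.
        class DistortionPass : ScriptableRenderPass
        {
            static readonly int s_TempId = Shader.PropertyToID("_TempColor");
            readonly Material m_Material; // shader that offsets UVs by _DisplacementRT
            RenderTargetIdentifier m_Source;

            public DistortionPass(Material material)
            {
                m_Material = material;
                renderPassEvent = RenderPassEvent.BeforeRenderingPostProcessing;
            }

            public void Setup(RenderTargetIdentifier cameraColor) => m_Source = cameraColor;

            public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
            {
                var cmd = CommandBufferPool.Get("Distortion");
                var desc = renderingData.cameraData.cameraTargetDescriptor;
                desc.depthBufferBits = 0;
                cmd.GetTemporaryRT(s_TempId, desc, FilterMode.Bilinear);
                cmd.Blit(m_Source, s_TempId, m_Material); // apply the shockwave offsets
                cmd.Blit(s_TempId, m_Source);             // copy the result back
                cmd.ReleaseTemporaryRT(s_TempId);
                context.ExecuteCommandBuffer(cmd);
                CommandBufferPool.Release(cmd);
            }
        }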



    Freeform lights' Falloff offset revived in v2021

    The Falloff offset feature of the 2D lights was removed in version 2021. I need it in my project, so in order to be able to upgrade the editor I revived it. If you need it, you can take the commit from here:
    https://github.com/QThund/Graphics/...a75f0e1c86cdf6b13f88af8d77f04301e9&diff=split

    Light volume textures

    I wanted to fake the visual effect of light on an illusory mass of smoke. To achieve this, I modified the 2D lights (the Light2D, RendererLighting and Light2DEditor classes, and the Volumetric versions of the Light2D shaders). Now I can add animated textures to the lights and combine them as needed to simulate whatever I want.
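    The animation itself can be driven from a plain MonoBehaviour; the part that requires the fork is making the volumetric light shaders actually sample the texture. A hypothetical driver (the _LightTexture / _LightTextureST shader properties are fork-added, made-up names, and the Light2D namespace varies by URP version, Experimental in older ones):

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.Rendering.Universal;

        [RequireComponent(typeof(Light2D))]
        class AnimatedLightTexture : MonoBehaviour
        {
            public Texture2D texture;
            public Vector2 scrollSpeed = new Vector2(0.05f, 0.02f);

            void Update()
            {
                // Scroll the texture over time; the forked volumetric shader would
                // modulate its light color by this texture. Globals are used here
                // for simplicity, so this sketch only supports one such light.
                Shader.SetGlobalTexture("_LightTexture", texture);
                Shader.SetGlobalVector("_LightTextureST",
                    new Vector4(1f, 1f, Time.time * scrollSpeed.x, Time.time * scrollSpeed.y));
            }
        }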





    Shadow rendering optimization

    Do you use ShadowCaster2Ds and want the performance of your game drastically boosted? I wanted that too.
    I created a class derived from ShadowCaster2D that takes all the meshes of the ShadowCaster2Ds in the child objects and merges them into one; then all the original ShadowCaster2Ds are disabled or destroyed. So when the shadows are rendered, there is only 1 draw call instead of 1 per shadow caster (which, in fact, produces many more than 1). It's what I would have expected from CompositeShadowCaster2D: a way to group shadow casters.
    I got a performance boost of +50% in some cases, although it depends on the scene setup.
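    The combining step itself looks roughly like this. It is only a sketch: stock URP does not expose the ShadowCaster2D mesh publicly (and its namespace varies by version), so GetCasterMesh() below stands in for whatever accessor a fork or reflection provides:

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.Rendering.Universal;

        class CombinedShadowCaster2D : MonoBehaviour
        {
            Mesh BuildCombinedMesh()
            {
                var casters = GetComponentsInChildren<ShadowCaster2D>();
                var combine = new CombineInstance[casters.Length];

                for (int i = 0; i < casters.Length; i++)
                {
                    combine[i].mesh = GetCasterMesh(casters[i]);
                    // Bake each child's transform relative to this object.
                    combine[i].transform = transform.worldToLocalMatrix *
                                           casters[i].transform.localToWorldMatrix;
                    casters[i].enabled = false; // 1 draw call instead of N
                }

                var combined = new Mesh();
                combined.CombineMeshes(combine, true);
                return combined;
            }

            // Placeholder: stock URP does not expose this; a fork (or reflection)
            // is needed to reach each caster's internal mesh.
            Mesh GetCasterMesh(ShadowCaster2D caster) => null;
        }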
    Here is the demonstration:



    [THE LIST CONTINUES IN A POST BELOW]

    That's all for now!
    Other things I shared with the community that you may find useful:
     
    Last edited: Jul 1, 2022
  2. NotaNaN
     Joined: Dec 14, 2018 · Posts: 325
    I realize that the 2D URP team is ridiculously understaffed; however, I still find it disturbing that in one week you implemented features that the community has wanted for an entire year. :eek:

    Amazing job, @ThundThund. ;)
    Maybe one day 2D URP will be open source and you can be the person to maintain it for us? :p
     
  3. AlexVillalba
     Joined: Feb 7, 2017 · Posts: 346
    Thank you @GliderGuy. If they pay me for that, we can start talking :D
     
  4. AlexVillalba
     Joined: Feb 7, 2017 · Posts: 346
    Shadow softening

    Many people complain about the aliasing of the 2D shadows. I have applied a Gaussian blur to the light texture before it is blended, so shadows now look as soft as I want (I get the best result by blurring them just a little). This does not affect the light volumes, as they are rendered in a different way that does not allow me to apply the same technique.
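    In code, the softening is just a separable blur over the light render texture before it is blended. A sketch, assuming a two-pass blur material (the blur shader itself is not shown):

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.Rendering;

        static class LightTextureBlur
        {
            // Run a horizontal pass into a temporary target, then a vertical pass
            // back into the light texture.
            public static void Blur(CommandBuffer cmd, RenderTargetIdentifier lightRT,
                                    RenderTextureDescriptor desc, Material blurMaterial)
            {
                int temp = Shader.PropertyToID("_LightBlurTemp");
                cmd.GetTemporaryRT(temp, desc, FilterMode.Bilinear);
                cmd.Blit(lightRT, temp, blurMaterial, 0); // pass 0: horizontal
                cmd.Blit(temp, lightRT, blurMaterial, 1); // pass 1: vertical
                cmd.ReleaseTemporaryRT(temp);
            }
        }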



    Reducing 2D Light volume banding

    If you have been using 2D lights with a volume opacity > 0, you may have seen the ugly banding effect in the gradients. You can mitigate this by applying some dithering, as sketched below. It darkens the light a bit, but it's worth it.
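    Dithering here means nudging each pixel's value by a tiny, screen-position-dependent threshold so the quantization steps stop lining up into visible bands. Shown in C# to illustrate the math (in the fork it lives in the light-volume shader):

    Code (CSharp):
        static class OrderedDither
        {
            // Standard 4x4 Bayer matrix.
            static readonly float[,] k_Bayer =
            {
                {  0,  8,  2, 10 },
                { 12,  4, 14,  6 },
                {  3, 11,  1,  9 },
                { 15,  7, 13,  5 },
            };

            // Offset 'value' by less than one 8-bit quantization step, with the
            // sign and magnitude depending on the pixel position.
            public static float Apply(float value, int x, int y)
            {
                float threshold = (k_Bayer[y & 3, x & 3] + 0.5f) / 16f - 0.5f;
                return value + threshold / 255f;
            }
        }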



    [THE LIST CONTINUES IN A POST BELOW]
     
    Last edited: Sep 14, 2021
  5. pahe
     Joined: May 10, 2011 · Posts: 543
    Man, awesome work. I hope those things get merged into master at some point!
     
  6. AlexVillalba
     Joined: Feb 7, 2017 · Posts: 346
    Thank you pahe! I hope the Unity guys decide to add these features officially in the near future, as they are not hard to implement (if you know the codebase). I think they just need a bit of pressure from the dev community; currently they don't see the 2D Renderer as a priority.
     
  7. AlexVillalba
     Joined: Feb 7, 2017 · Posts: 346
    Per-camera post-process FXs (not Volume blending)
    Currently with the 2D Renderer, if you want to apply different PPFXs to the UI and the game in the same frame, you cannot. You can stack cameras and you can make different Volumes affect different cameras, but when the frame is rendered the VolumeManager will blend all the VolumeProfiles from the top overlay camera down to the base camera. Any change you make in the Volume of the overlay camera will also be applied to the geometry rendered by the base camera.

    I have modified the renderer so I can render all the UI to a RenderTexture with a separate camera (no stacking), and the UberPost shader so it writes the alpha channel appropriately. The bloom FX was using the alpha value of the original texture, so if the texture was totally transparent at a texel, the color of the "light irradiation" was not added but discarded (the texel remained transparent). Instead, I calculate the luminance of the color produced by the bloom FX and add it to the original alpha, so when the RenderTexture is used in a RawImage the "light irradiation" is visible and blends with the underlying colors.
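    The alpha change boils down to this (a C# rendition of the shader math; the luma weights are the standard Rec. 709 ones, the rest follows the description above):

    Code (CSharp):
        using UnityEngine;

        static class UberPostAlpha
        {
            // Instead of keeping the (possibly zero) source alpha, add the bloom
            // contribution's luminance to it so bloom survives transparent texels.
            public static float BloomAlpha(Color bloom, float sourceAlpha)
            {
                float luminance = 0.2126f * bloom.r + 0.7152f * bloom.g + 0.0722f * bloom.b;
                return Mathf.Clamp01(sourceAlpha + luminance);
            }
        }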


    [THE LIST CONTINUES IN A POST BELOW]
     
    Last edited: Jun 7, 2022
  8. pahe
     Joined: May 10, 2011 · Posts: 543
    Man, I hope you continue to post this stuff! I've started as a beginner in gfx programming, but I'm learning quite a lot just by following you :D
     
  9. PutridEx
     Joined: Feb 3, 2021 · Posts: 1,136
    Been messing around with 2D for fun and out of curiosity, and testing performance.
    Nice work on the shadow perf improvements (+soft shadows) and other stuff, looks cool! :D
     
  10. AlexVillalba
      Joined: Feb 7, 2017 · Posts: 346
    Thank you!
     
  11. Zephus
      Joined: May 25, 2015 · Posts: 356
    Has Unity reached out to you to merge these into master? Because I'm kind of baffled how quickly you got all of this working and that there's not a single answer from anyone at Unity in here.
     
  12. AlexVillalba
      Joined: Feb 7, 2017 · Posts: 346
    Hi Zephus. No, I've not been asked to do that. We cannot expect the Unity dev team to implement such features as quickly as a guy who just downloads the GitHub repo and modifies it for himself; I mean, it's a big company and the engine is used by thousands, so they must follow some bureaucratic process to ensure everything is compatible, secure, etc. HOWEVER, the problem with the incredibly slow development of the 2D Renderer is not related to such a process; it has to do with the lack of resources the company is dedicating to it (just a couple of developers). They just don't care, as a company. That's my assumption.
     
  13. PanthenEye
      Joined: Oct 14, 2013 · Posts: 2,068
    I'd pay for this if it were available on the Asset Store, or anywhere else really, assuming it would be kept up to date with the latest LTS release of Unity.
     
  14. AlexVillalba
      Joined: Feb 7, 2017 · Posts: 346
    Honestly, I don't know if that's even legal.
     
  15. PanthenEye
      Joined: Oct 14, 2013 · Posts: 2,068
    I didn't think that far lol.
     
  16. AlexVillalba
      Joined: Feb 7, 2017 · Posts: 346
    Custom sprite pivot in Sprite Light2Ds
    Currently, a Sprite Light2D uses the center of the sprite as the source of the light, disregarding the sprite's pivot, so shadows are cast using that center as their origin.
    I have modified how the Sprite Light2D generates its mesh so the position of the light source coincides with the pivot you adjust in the Sprite Editor. This is handier and, in my opinion, the behaviour most people expect from this type of light.
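    The key bit is converting the pivot (stored in pixels, relative to the sprite rect) into local units so the light mesh can be generated around it. A sketch of that conversion (my reading of the change, not the actual fork code):

    Code (CSharp):
        using UnityEngine;

        static class SpritePivotUtil
        {
            // Offset from the sprite's rect center to its pivot, in local units.
            // Shifting the generated light mesh by the negated offset makes the
            // pivot the light's origin.
            public static Vector2 PivotOffsetInLocalUnits(Sprite sprite)
            {
                Vector2 fromCenter = sprite.pivot - sprite.rect.size * 0.5f;
                return fromCenter / sprite.pixelsPerUnit;
            }
        }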

    Observe how the pivot point is the source of the light:
    [GIF: Light2D sprite pivot demo]
    Scene:
    [Screenshot: scene view]
    Sprite editor:
    [Screenshot: pivot in the Sprite Editor]
     
  17. mmankt
      Joined: Apr 29, 2015 · Posts: 49
    Great work. Unity should hire you for the 2D team :D
     
  18. AlexVillalba
      Joined: Feb 7, 2017 · Posts: 346
    Bloom post-processing FX artifacts (flickering) FIXED

    By "fixed" I mean that, in the case of my game, the artifacts are practically imperceptible. I think it's impossible to get rid of what may be intrinsic to every Bloom technique, so I prefer to be prudent.

    This is how it looked before and after the fix:

    (Using DirectX, in Unity 2020.3.4)

    How

    To achieve this I had to change the code of the URP, so you would need to fork Unity's Graphics repository (the full source code of the URP), import it into your project replacing the built-in package, and change the code.

    The changes I made were few and simple. In PostProcessPass.cs, search for the SetupBloom() method. In its first 2 lines, remove the ">> 1" so that the size stored in tw and th is the source texture's instead of half of it.
    That's it. Compile. Enjoy.

    Code (CSharp):
        void SetupBloom(CommandBuffer cmd, int source, Material uberMaterial)
        {
            // Start at half-res
            int tw = m_Descriptor.width;  // >> 1;
            int th = m_Descriptor.height; // >> 1;
    Why

    The value stored in the tw and th variables is used to create the temporary render targets on which the Bloom technique is performed (downsampling first, then upsampling, applying a Gaussian blur). Before this happens, the algorithm in SetupBloom() performs an operation they call "prefilter", which copies the content of the source texture (the texture used while rendering the geometry, where the color and brightness related to the Bloom FX are stored) to the first texture that feeds the downsampling loop.

    Code (CSharp):
        // Prefilter
        var desc = GetCompatibleDescriptor(tw, th, m_DefaultHDRFormat);
        cmd.GetTemporaryRT(ShaderConstants._BloomMipDown[0], desc, FilterMode.Bilinear);
        cmd.GetTemporaryRT(ShaderConstants._BloomMipUp[0], desc, FilterMode.Bilinear);
        Blit(cmd, source, ShaderConstants._BloomMipDown[0], bloomMaterial, 0);
    In the loop, the content of the texture is processed and copied to another texture whose size is half the previous one's, and this reduction happens N times (N depends on the original values of tw and th). Without our changes, the "prefilter" step merges the preparation of the first texture's content with a first downsampling (halving), saving some GPU power, instead of copying first and downsampling afterwards.

    The format of the source texture (R8G8B8A8_UNORM) is different from the format of the textures used in the process (R11G11B10_FLOAT).
    The shader used in the process is Bloom.shader, which is referenced by the bloomMaterial variable and contains 4 passes: Prefilter (0), Blur horizontal (1), Blur vertical (2) (both used in the downsampling step) and Upsample (3).
    The content of the Prefilter pass obviously differs from that of the blur passes. Apart from the maths in the code, the difference is that it samples texels from a texture whose format is different (less precise) than the destination texture's, applying a bilinear filter (as in the other steps). Somehow, when we get rid of the bilinear filtering (both textures now being equal in size, thanks to our changes) the texels are read and converted properly, and the Bloom process then continues without problems. In other words, the value of each texel of the source texture is no longer interpolated; when the source texture was double the size of the destination, the UV of texel [2, 4] in the destination corresponded to texel [4, 8] in the source (skipping the texels in column 3 and row 7), but the final value was interpolated among the 8 texels surrounding [4, 8], deforming it and, I guess, losing information.
    Honestly, I haven't studied the shader code deeply enough to fully understand how the data is lost, since I don't have more time for this.
    Feel free to add your ideas and conclusions.
     
  19. AlexVillalba
      Joined: Feb 7, 2017 · Posts: 346
    Post-process FXs per camera without stacking

    I have made a small change in the 2D Renderer so I can use more than one camera, each culling different layers and each with a different post-process FX Volume configuration, in such a way that the result of rendering with one camera (after applying its PPFXs) is used as the base for the next camera (the one with a higher Priority). This implies that every PPFX applied to the next camera also affects the previously rendered image, although FXs like Bloom are not additive (Bloom in the next camera does not apply to the previous image).
    I do not have to set any camera as Base or Overlay, which would affect other components like the Pixel Perfect Camera.
     