
Unity 2D Renderer in Universal Render Pipeline in 2019.3

Discussion in '2D Experimental Preview' started by rustum, Feb 17, 2020.

  1. PHI_Game_Dev

    PHI_Game_Dev

    Joined:
    Jul 8, 2020
    Posts:
    1
    Hi,

Today I used the Light2D component to create a flickering sprite light.

It works great, but there is a small issue: to create the effect, I change the sprite and the volume opacity at runtime. These properties do not have a setter, so I used reflection as a workaround to modify the fields directly.

Is there a reason why these properties are read-only? I looked at the GetMesh function and it looks like it's possible to change these properties at runtime. Thanks in advance for your reply.
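For reference, a reflection workaround along the lines described above might look like the sketch below. The private field names (`m_LightCookieSprite`, `m_LightVolumeOpacity`) are assumptions and can differ between URP versions, so verify them against your package source:

```csharp
using System.Reflection;
using UnityEngine;
using UnityEngine.Experimental.Rendering.Universal;

// Sketch only: pokes Light2D's private backing fields via reflection.
// The field names below are assumptions and may differ per URP version.
public static class Light2DReflectionHack
{
    private static readonly FieldInfo SpriteField = typeof(Light2D)
        .GetField("m_LightCookieSprite", BindingFlags.Instance | BindingFlags.NonPublic);
    private static readonly FieldInfo OpacityField = typeof(Light2D)
        .GetField("m_LightVolumeOpacity", BindingFlags.Instance | BindingFlags.NonPublic);

    public static void SetSprite(Light2D light, Sprite sprite) =>
        SpriteField?.SetValue(light, sprite);

    public static void SetVolumeOpacity(Light2D light, float opacity) =>
        OpacityField?.SetValue(light, opacity);
}
```

Caching the FieldInfo objects in static fields avoids paying the reflection lookup cost every frame when driving a flicker from Update().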

    Light2D.png
     
  2. hzetta

    hzetta

    Joined:
    Mar 27, 2019
    Posts:
    1
    Did you find a way to fix this in builds?
     
  3. berdanka

    berdanka

    Joined:
    Sep 29, 2015
    Posts:
    9
Would I gain anything in performance by moving sprites that use Mask (in the Sprite Lit Shader master node) to a separate tileset/texture?
I have one huge texture with 256 tiles, but only a small part of it (fewer than 20 tiles) writes to the green channel of the 2D Renderer mask. I've been thinking about moving those tiles to another texture and using two shaders (with and without mask writes), but I don't know if it's worth the time.
     
  4. Ferazel

    Ferazel

    Joined:
    Apr 18, 2010
    Posts:
    468
    I took some time to spy on the 2D depth mesh branch as I was curious if I was going to need to write my own. https://github.com/Unity-Technologies/Graphics/compare/2d/depth-mesh

First, let me say THANK YOU for making this optimization a priority! I think this will help fill rate a ton for Lost Crypt and other 2D Renderer examples, where we can skip rendering/blending transparent pixels that are obscured by the opaque portions of sprites. I think this feature is worth waiting for an official implementation.

The feature requires a tight mesh, which is understandable. However, I am begging you to also do an optimization pass on the mesh that the tight mesh algorithm produces. It's horribly inefficient in many common use cases: the auto-generated tight mesh sometimes has 4-10x the verts that I feel a tight mesh needs. In many cases I'd gladly take a less accurate tight mesh for fewer triangles. There's an outline definer in the Sprite Editor (which I just found out about today!) to define the tight mesh. However, if we need to rely on the poorly implemented auto tight mesh for the opaque geometry, the problem is still present and a concern. Maybe add an accuracy slider if having the user define the opaque mesh is too difficult?
     
    Last edited: Jul 15, 2020
    Chris_Chu likes this.
  5. Lo-renzo

    Lo-renzo

    Joined:
    Apr 8, 2018
    Posts:
    731
    @Ferazel Where are you able to get away with opaque? I'd like to use more opaque but I find it hard to find cases where I can easily do so, so I've got gobs of transparent. What strategies are you using to maximize your opaque usage?
     
  6. Ferazel

    Ferazel

    Joined:
    Apr 18, 2010
    Posts:
    468
Obviously, it depends on the game/art style/usage/etc. However, most 2D image data can be broken down into an opaque mesh and a transparent mesh, where the transparent mesh is primarily the edging. From what I understand of the feature they're working on, when you import a sprite there will be a checkbox to generate a depth mesh for the sprite based on the opacity of the original image data. During the 2D SRP there is a render pass that renders the depth meshes into the depth buffer. Then you render the transparent, fully shaded sprite, z-testing its pixels against the depth buffer. In most cases this will cut down on overdraw and fill rate dramatically, as only the topmost opaque pixels need to be shaded, and only the edges of the sprite geometry need to be transparent (but now z-tested) and blended.
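In rough Unity terms, the depth pre-pass described above could be sketched like this. This is purely illustrative, not the actual 2D Renderer implementation; it assumes each sprite supplies an opaque-interior depth mesh and a depth-only material (ZWrite On, ColorMask 0):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Illustrative sketch of a 2D depth pre-pass, NOT Unity's implementation.
public static class DepthPrepassSketch
{
    // depthOnly is assumed to be a material whose shader writes depth only
    // (ZWrite On, ColorMask 0, no fragment work beyond the depth output).
    public static CommandBuffer Build(Mesh[] depthMeshes, Matrix4x4[] transforms, Material depthOnly)
    {
        var cmd = new CommandBuffer { name = "2D Depth Prepass (sketch)" };

        // 1) Lay down depth for the opaque interior of every visible sprite.
        for (int i = 0; i < depthMeshes.Length; i++)
            cmd.DrawMesh(depthMeshes[i], transforms[i], depthOnly);

        // 2) The regular transparent sprite pass then renders with ZTest LEqual,
        //    so fragments hidden behind opaque interiors are rejected before shading.
        return cmd;
    }
}
```

Because every draw here uses the same ultra-cheap depth-only shader, the meshes could in principle be merged into one vertex buffer and submitted in very few draws.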
    upload_2020-7-15_16-47-46.png
     
    yuanxing_cai and Lo-renzo like this.
  7. Lo-renzo

    Lo-renzo

    Joined:
    Apr 8, 2018
    Posts:
    731
@Ferazel If I understand right, that would broaden the definition of a sprite compared to now? If it has a depth mesh, a "sprite" plugged into a SpriteRenderer has two meshes and two materials associated with it? I could see this helping a lot for my scenes, because many trees are in front of one another and I need some transparency, but not all of it.
     
  8. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    2,087
I think the depth mesh will be mostly hidden from the user, so I don't think there will be differences in the Sprite Renderer. But I am hoping to be able to see it in the Sprite Editor.

In the Sprite Editor you can already define a custom outline for the sprite shape and the physics shape. I think there will be an additional one for the depth shape.

However, I do agree that we need a somewhat better shape generator, especially for low-res pixel art styles.

I am already using the depth buffer to cull my sprites, but having a depth mesh to selectively cull parts of a sprite is a welcome feature indeed.

But everyone, listen! There is another idea.

Having said the above, I really wish there were a way to do this with an additional depth map instead of a mesh, where the transparent part of the depth map doesn't write to the depth buffer and the rest writes based on the alpha value of the map. Even if it could only do a simple on/off toggle, I would gladly take that over the mesh approach. Writing to the depth buffer based on a secondary depth map is a much more artist-friendly and easier way to control depth writing than a mesh, because we get pixel accuracy without needing a super-accurate tight mesh. We already have secondary texture support, so why not make use of that? I am guessing I could already do this if I added an additional material to the Sprite Renderer, but that is another draw call... (maybe there will be an extra draw call anyway? We will see...)

Or even better, have a combination of both the mesh and the map approaches fused together for the ultimate optimization with pixel accuracy! The depth mesh would use a secondary depth map from the Sprite Editor to clip the pixels that require depth writing. This would reduce texture fetching, with pixel-accurate depth writing controlled by artists using the tools they are familiar with!

I actually think that without an additional map to do pixel-accurate depth writing, you would need a super-accurate mesh or get artifacts around edges. The depth buffer is not only useful for cutting down overdraw; it is essential if you want to mix GPU indirect-instanced rendering with the Sprite Renderer.
     
    Last edited: Jul 16, 2020
  9. Ferazel

    Ferazel

    Joined:
    Apr 18, 2010
    Posts:
    468
@Lo-renzo @castor76 has it right. This depth mesh would likely be completely transparent to the Unity user (hopefully we can access/edit it in the Sprite Editor): no extra materials or meshes to wrangle when rendering. The depth pass would be part of the 2D render pipeline. While I agree that a texture map would be more accurate, the draw call for the depth geometry should be extremely fast without fragment shading. Since the depth pass uses the same shader for everything, the depth meshes could be batched and sent in one draw. For an optimization you want it to be as fast as possible, IMO, lest it cut into the possible benefit of the optimization; it is adding an additional draw that we didn't have before. For my use cases a depth mesh would help a ton, depending on the quality of the generated mesh. I am looking forward to seeing the results.
     
  10. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    2,087
I could be wrong, but I am guessing that internally, a separate depth pass will generate an extra draw call anyway. And if we use an atlas (like all other sprites), the extra depth map texture will also be in the atlas, so there is no cost from being unable to batch the depth pass. The depth mesh approach without texture accuracy is just not unleashing its full potential.
     
  11. elZach

    elZach

    Joined:
    Apr 23, 2017
    Posts:
    40
Haven't looked into it further. I probably won't look further into the 2D Renderer in general until @Chris_Chu and team have solved the performance issues they are working on and feedback in this thread has a point again.
     
  12. Wollbobaggins

    Wollbobaggins

    Joined:
    Mar 26, 2020
    Posts:
    25
    wow huge thanks, thought i was going crazy!
     
  13. Chris_Chu

    Chris_Chu

    Unity Technologies

    Joined:
    Apr 19, 2018
    Posts:
    182
I wrote my own sprite meshing code so that we could properly implement the depth mesh, and it happens to be a significant improvement over what is built into Unity at the moment. I decided to reuse the triangulation library for now, but that needs to be replaced as well.

I'm glad to hear there is so much interest in this branch, but for people who might be interested in it, I want to say that unfortunately it requires a version of Unity that is not out yet, so there will be a wait before this feature is ready.
     
  14. JoaoSantos

    JoaoSantos

    Joined:
    Mar 31, 2014
    Posts:
    20
I manipulated the Freeform shape of the 2D Light to create a polygon bounded by the surrounding colliders.

But it isn't the best solution to my problem, because I want to use the properties that exist on the Point light type, for example.

    Code (CSharp):
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Experimental.Rendering.Universal;

#if UNITY_EDITOR
using UnityEditorInternal;
#endif

[ExecuteAlways]
[RequireComponent(typeof(Light2D))]
public class Light2DAdaptative : MonoBehaviour
{
    [Header("Custom Attributes")]
    [SerializeField]
    [Min(3)]
    private int sides = 4;

    [SerializeField]
    [Min(0.01f)]
    private float radius = 1f;

    [SerializeField]
    [Range(0, 359)]
    private int angle = 45;

    [SerializeField]
    private LayerMask layersFilter;

    [Header("Base Attributes")]

    [SerializeField]
    [Min(0)]
    private int lightOrder = 0;

    [SerializeField]
    [ColorUsage(false)]
    private Color color = Color.white;

    [SerializeField]
    [Range(0, 1f)]
    private float shadowIntensity = 0f;

    private Light2D currentLight;

    public Light2D Light
    {
        get
        {
            if (this.currentLight == null)
            {
                this.currentLight = GetComponent<Light2D>();
                this.currentLight.lightType = Light2D.LightType.Freeform;
            }
            return this.currentLight;
        }
    }

    private void OnValidate()
    {
        ChangeProperties();
        TryResizeLight();
    }

    private void OnEnable()
    {
#if UNITY_EDITOR
        ComponentUtility.MoveComponentUp(this);
#endif
        TryResizeLight();
    }

    private void Update()
    {
        TryResizeLight();
    }

    private void ChangeProperties()
    {
        Light.lightOrder = this.lightOrder;
        Light.color = this.color;
        Light.shadowIntensity = this.shadowIntensity;
    }

    private void TryResizeLight()
    {
        // The Freeform shape path length must match the requested side count.
        var lightNaturalPoints = this.Light.shapePath.Length;

        if (this.sides != lightNaturalPoints)
        {
            this.sides = lightNaturalPoints;
            Debug.Log("Create or remove points in the Freeform Light 2D component to see the correct shape");
        }

        ResizeLight();
    }

    private void ResizeLight()
    {
        var localRadius = this.radius;

        var sides = this.sides;
        var baseAngle = Mathf.PI * 2 / sides;

        var offsetAngle = Mathf.Deg2Rad * this.angle;

        for (int i = 0; i < sides; i++)
        {
            // Distribute the polygon vertices evenly around a circle.
            var localAngle = baseAngle * i;
            var position = new Vector3(Mathf.Cos(localAngle + offsetAngle), Mathf.Sin(localAngle + offsetAngle), 0f) * localRadius;

            // Pull the vertex in to the first obstacle hit so the light stops at walls.
            RaycastHit2D value = Physics2D.Raycast(transform.position, position, localRadius, this.layersFilter);

            var relativePosition = transform.localPosition;
            if (value.collider != null)
            {
                Vector3 transportPoint = transform.InverseTransformPoint(value.point);
                relativePosition += transportPoint;
            }
            else
            {
                relativePosition += position;
            }

            this.Light.shapePath[i] = relativePosition;
        }
    }
}
    View attachment 662436

With this, I can control the polygon precision by changing the number of sides. It also changes the shadowing, because I don't project the light into the wall: the custom polygon extends the light only as far as the wall.

Note: I still get errors with this when the light crosses the wall collider.
     
  15. Lo-renzo

    Lo-renzo

    Joined:
    Apr 8, 2018
    Posts:
    731
    @Devs: is a 2D Directional Light coming either as its own light or as an option on 2D Global Light? Or is a huge Point Light what's intended?
     
  16. Chris_Chu

    Chris_Chu

    Unity Technologies

    Joined:
    Apr 19, 2018
    Posts:
    182
I hate to give such a vague answer, but it's something that will be coming; I'm just not sure when.
     
  17. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    2,087
For cases such as pixel art games, imagine the depth mesh had to be 100% pixel accurate. That might be way too many polygons... but having said that, pixel-accurate mesh generation for the rendering mesh, colliders, and depth mesh all sounds interesting.
     
    GliderGuy and GilbertoBitt like this.
  18. Chris_Chu

    Chris_Chu

    Unity Technologies

    Joined:
    Apr 19, 2018
    Posts:
    182
I don't want to focus too much on this feature until we are closer to done, but right now the depth mesh isn't 100% pixel accurate (though it would be simple to make it that way). In its current implementation, the sprite is analyzed for regions of opaque pixels, which are used to create meshes that conservatively approximate those regions.
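As a toy illustration of what "analyzed for regions of opaque pixels" could mean (this is not Unity's actual algorithm), the sketch below just finds the tight bounding rectangle of pixels at or above an alpha cutoff; a real conservative mesher would emit polygonal regions rather than a single rectangle:

```csharp
using UnityEngine;

public static class OpaqueRegionSketch
{
    // Toy stand-in for opaque-region analysis: returns the tight bounding
    // rectangle of pixels whose alpha is at or above the cutoff.
    public static RectInt OpaqueBounds(Color32[] pixels, int width, int height, byte alphaCutoff = 254)
    {
        int xMin = width, yMin = height, xMax = -1, yMax = -1;

        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                if (pixels[y * width + x].a >= alphaCutoff)
                {
                    if (x < xMin) xMin = x;
                    if (x > xMax) xMax = x;
                    if (y < yMin) yMin = y;
                    if (y > yMax) yMax = y;
                }

        // No opaque pixels found: return an empty rect.
        return xMax < 0
            ? new RectInt(0, 0, 0, 0)
            : new RectInt(xMin, yMin, xMax - xMin + 1, yMax - yMin + 1);
    }
}
```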
     
  19. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    2,087
This is why I think the depth mesh will not work properly in practice without a pixel-accurate mesh, or a map to compensate for its inaccuracy. We need an option to either make the mesh pixel accurate or supply a depth mask map.
     
  20. Ferazel

    Ferazel

    Joined:
    Apr 18, 2010
    Posts:
    468
Hmm, as someone who is working on a high-res 2D game, an automatic conservative mesh works great for me. If the system used a texture to define the pixel mask, it would make things much less efficient. There is no way I can make all my sprites fit onto a single atlas to make a single batch. I also don't want to spend rendering time in the depth pass on texture lookups or swapping texture atlases, and I don't want to spend time managing another mask texture for every sprite.

Thinking about pixel art games though, I wonder if it would be that bad with a conservative mesh? My understanding is that the goal of this feature is to cut down on fill rate. If the algorithm is conservative, the generated mesh will not render into transparent areas. I'm guessing this would still be helpful for pixel art games, assuming you have some texel resolution there: maybe not to the same extent as a full-resolution 2D game, but it might help a little. Are you expecting to use the depth mesh for something other than fast fill-rate depth-testing improvements, @castor76?
     
  21. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    2,087
Assuming that the depth mesh writes to the normal depth buffer, any rendering that uses depth will be affected by it, which means that any effect that reads the depth buffer, such as depth of field or SSAO, will look wrong. Also, GPU-instanced rendering relies solely on the depth value for sorting, because the way it renders on the GPU does not automatically sort by any of the 2D sorting methods Unity uses; it becomes impossible to use alongside a depth mesh that is not pixel accurate, even for non-pixel-art 2D. There are other problems that may arise because of this.

Talking about atlas size, you could reduce the size of the atlas and still benefit from using a map. If another map isn't used, the system can still use the normal sprite atlas with some alpha cutoff value and still benefit from even better depth culling. Reading another pixel from the map would then be effectively free, since you have already fetched it. Even if it is not a free texture read, I bet better and more accurate culling beats the overdraw, especially with high-res 2D.

Also, if a pixel-accurate mesh can be generated for the depth mesh, there is no reason why this shouldn't be an option.

So when generating the depth mesh, we should have options to choose the pixel accuracy as well as the alpha value at which the mesh is conservative.
     
    Last edited: Sep 7, 2020
    GliderGuy likes this.
  22. Ferazel

    Ferazel

    Joined:
    Apr 18, 2010
    Posts:
    468
Yeah, our needs are different. I am solely looking for a fill-rate improvement, not to use other depth effects.

I still feel you're not understanding the high-res workflow. You're assuming that all my sprites even fit on an atlas, which many do not. I am working with a lot of draw calls, and increasing the fill-rate cost to do an alpha clip test in the fragment would not be desirable, particularly on mobile TBDR hardware.

The texture-less depth mesh has a lot of advantages for performance. It is an extremely fast draw call (not all draw calls are equal): a single shader with no fragment work, no texture binds/lookups, just a blast of triangles for all of the visible sprite depth meshes in the same vertex buffer sent to the GPU.

Naturally, I don't want to dismiss your workflow. I think the best answer is to let the user define the depth mesh shape in the Sprite Editor: if a user needs a more custom/accurate shape, they can define it themselves. However, for my project's needs I'd take the automatic, perf-focused workflow sooner, in particular if they are upgrading their sprite shape/tessellation code. Obviously I'm not sure on timeframes, but if it means I can maybe get this in 2020 LTS rather than wait for a custom editor in 2022, I'd personally take it ASAP.
     
    Last edited: Sep 8, 2020
    GliderGuy and Lo-renzo like this.
  23. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    2,087
Yes, not all projects' needs are the same; that is why we should have options for mesh generation to fit everyone's needs.

It is not uncommon to mix 3D and 2D sprites nowadays. Having a non-pixel-accurate mesh will lead to horrible results in that case.
     
    Last edited: Sep 8, 2020
    GliderGuy and Ferazel like this.
  24. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    2,087
What kinds of updates can we expect in URP 10 as far as the 2D Renderer goes? Any news on this?
     
  25. Chris_Chu

    Chris_Chu

    Unity Technologies

    Joined:
    Apr 19, 2018
    Posts:
    182
    We are mainly focused at the moment on making quality improvements to the 2D renderer. This means bug fixing, performance improvements, and UI/UX improvements. We will be adding something like _CameraOpaqueTexture (it will probably be called something different) as this is somewhat performance related.

Having said that, I don't know what will make it into Universal RP 10 vs Universal RP 11, as there isn't a release schedule for future releases of URP yet.

The only feature I can say for sure won't make it into 10 is the depth mesh feature, as it requires changes to the Unity engine.
     
  26. Ferazel

    Ferazel

    Joined:
    Apr 18, 2010
    Posts:
    468
    Well that saddens me, but I appreciate you letting us know.
     
    GliderGuy and Chris_Chu like this.
  27. EGA

    EGA

    Joined:
    Sep 12, 2014
    Posts:
    94
  28. Chris_Chu

    Chris_Chu

    Unity Technologies

    Joined:
    Apr 19, 2018
    Posts:
    182
    This is on the list of things to do. We will start more seriously looking into improving our shadows probably in a month or so. However, I can't say which releases you will start seeing these changes in.
     
    GliderGuy likes this.
  29. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    2,087
I am sure it can differ from case to case, but I generally think the priority for added features should be:

1. Emission channel. Any render pipeline affected by lighting can't be complete without an emission channel.
2. Opaque texture. This is a really basic feature too.
3. Soft shadows.
     
    Xiao_Xu, AzureMasters and GliderGuy like this.
  30. pahe

    pahe

    Joined:
    May 10, 2011
    Posts:
    495
    @Chris_Chu Do you know what the plan is about combining Hybrid Renderer and the 2D Renderer (i.e. making them work together or have only one Renderer which can support both)? Or should I ask that in the ECS / DOTS Forum? I'm using the ECS currently for my game, but also want to use the 2D lighting features, but I can only use one renderer in my project, so I have to chose which feature I want to have.
     
    GliderGuy and NotEvenTrying like this.
  31. Chris_Chu

    Chris_Chu

    Unity Technologies

    Joined:
    Apr 19, 2018
    Posts:
    182
    I didn't realize URP is able to do hybrid rendering. I'll look into this.
     
    Last edited: Nov 1, 2020
    GliderGuy and pahe like this.
  32. Chris_Chu

    Chris_Chu

    Unity Technologies

    Joined:
    Apr 19, 2018
    Posts:
    182
So I talked to @Ted_Wikman and he mentioned that you will not see a performance benefit from using Sprite Renderers in DOTS, because DOTS basically just wraps the Sprite Renderer in a way that can be accessed from DOTS. So from the sound of it, you can just use the 2D Renderer.

    I believe his suggestion was that if you need DOTS for other things than rendering, that you can do your DOTS stuff in a subscene and work with your Sprite Renderers outside that subscene.

    He has a thread here about using 2D Entities in non-Tiny games. If you have any additional questions I'd suggest you ask them there.

    Just one final thing, out of curiosity, would you be able to tell me a little about your project and why you want to use DOTS? We don't get too many 2D users asking for this, so anything you are willing to share would be really appreciated.
     
    Last edited: Nov 4, 2020
    GliderGuy and pahe like this.
  33. pahe

    pahe

    Joined:
    May 10, 2011
    Posts:
    495
    Thank you for that info. I'll take a look into that thread and ask there for more help.

    Of course. I'm developing a 2D adventure game. For lighting we're going to use the new 2D lighting feature with normal maps and a custom heightmap addition to the shader.

    This is one of our characters with lighting:
    flip.gif

    Most of the game is using the ECS/DOTS for instantiating GOs and handling systems. From my tests, I was able to use the Entities for instantiation and they were working correctly with the Hybrid Renderer. Once I switched over to the 2D Renderer though, nothing was displayed anymore until I switched back again.

    I searched a bit if this should be possible at all, found only a couple of posts about it (mostly by me ^^):
    here
    here
    here here

    So, if
    EntityManager.Instantiate(Entity characterEntity)
    would work also with the 2D Renderer (and as I mentioned, it didn't in my tests, I had to use the Hybrid Renderer for that), it would be great, though I wouldn't understand why there are 2 different renderers then.

    Does that make sense to you?
     
    GliderGuy and EvOne like this.
  34. Foriero

    Foriero

    Joined:
    Jan 24, 2012
    Posts:
    559
    Is URP 2D Renderer ready for Production?
     
  35. yuanxing_cai

    yuanxing_cai

    Unity Technologies

    Joined:
    Sep 26, 2014
    Posts:
    335
    It will be some time next year.
     
    GliderGuy likes this.
  36. Chris_Chu

    Chris_Chu

    Unity Technologies

    Joined:
    Apr 19, 2018
    Posts:
    182
    Sorry for not getting back to you sooner. Thanks for posting about your project, and I will have to look into what you are describing further.
     
    pahe likes this.
  37. _watcher_

    _watcher_

    Joined:
    Nov 7, 2014
    Posts:
    231
I can't see the shader in the selection combo box when I try to add it to the list. In fact, I can't see any 'Universal Render Pipeline' shaders. Any ideas why that happens? Building for Android.

    EDIT: Solved. Had to manually go into the URP folder in the project, in my case this was in
    Code (CSharp):
Packages\Universal RP\Shaders\Utils
find the FallbackError shader, and manually drag and drop it in. This works, but of course produces an extra draw call (since the new fallback shader cannot batch with the others). The error seems to be related to the 2D Renderer (it happens in a new URP scene with only the 2D Renderer and a camera).
     
    Last edited: Dec 19, 2020
  38. Holygoe

    Holygoe

    Joined:
    Mar 3, 2015
    Posts:
    18
Will the 2D Renderer support the Scene Color node?
     
    EvOne and SquaLLio like this.
  39. ThundThund

    ThundThund

    Joined:
    Feb 7, 2017
    Posts:
    161
    Is it planned to add emissive color support anytime soon (without having to use the Bloom effect and other tricks)?
     
  40. small-U

    small-U

    Joined:
    May 10, 2013
    Posts:
    16
In the URP 2D render pipeline, how do you distinguish light types in shaders? In the built-in pipeline, _WorldSpaceLightPos0.w == 0 means a directional light and _WorldSpaceLightPos0.w == 1 means any other light, or you can use
#ifdef POINT / #ifdef SPOT
     
    Last edited: Dec 26, 2020
  41. Chris_Chu

    Chris_Chu

    Unity Technologies

    Joined:
    Apr 19, 2018
    Posts:
    182
    With the way we are rendering lighting, there isn't a way to do this. All the lights are rendered into a lighting buffer, which is then used by the lit sprite shader.

    What are you interested in doing?
     
  42. pastaluego

    pastaluego

    Joined:
    Mar 30, 2017
    Posts:
    119
    How would _CameraOpaqueTexture for 2D Renderer work with the depth mesh feature? Would the opaque pixel submeshes of the transparent sprites be included in that texture?
     
  43. Chris_Chu

    Chris_Chu

    Unity Technologies

    Joined:
    Apr 19, 2018
    Posts:
    182
    There are no plans to support _CameraOpaqueTexture. We have instead created a new shader variable called _CameraSortingLayersTexture which will contain layers rendered from back to front, up until a specified layer. The upcoming release of URP 11 should have it.
     
  44. Holygoe

    Holygoe

    Joined:
    Mar 3, 2015
    Posts:
    18
What node will we be able to use for this? Is it Scene Color or another one? Or do I completely misunderstand what's happening? :) What article or book do you recommend for a proper understanding of the render pipeline?
     
  45. twicegamesoriginal

    twicegamesoriginal

    Joined:
    Jul 19, 2018
    Posts:
    1
Hi, I'm using the 2D Render Pipeline, but when I try to use shadows they look strange: they are low resolution and I don't know how to fix this problem. Does anyone know why?
     
  46. Holygoe

    Holygoe

    Joined:
    Mar 3, 2015
    Posts:
    18
Maybe check whether your shadows are on a scaled plane.
     
  47. suatss

    suatss

    Joined:
    Jul 30, 2015
    Posts:
    2
When will the 2D Renderer drop the experimental label and be ready to use in production?
     
    Xiao_Xu likes this.
  48. NotEvenTrying

    NotEvenTrying

    Joined:
    May 17, 2017
    Posts:
    26
    Does that mean there are no plans to support depth of field and numerous other fundamental effects that use depth, that people have been asking for over 2 years now?
     
    RemDust likes this.
  49. Chris_Chu

    Chris_Chu

    Unity Technologies

    Joined:
    Apr 19, 2018
    Posts:
    182
No. It's unrelated to depth writing.
     
  50. jeffweber

    jeffweber

    Joined:
    Dec 17, 2009
    Posts:
    613
Hello, based on the following Unity video (at around 20:25), I'm trying to create the 2D water distortion effect using _CameraSortingLayerTexture.

I think I have everything set up correctly, but it seems like the _CameraSortingLayerTexture in my Shader Graph is not rendering correctly. I'm using Unity 2021.1.b11 and URP 11.0.

I've looked at this for a long time but can't figure out what I'm missing. If I manually set the "CameraSortingLayerTexture" in the shader for the material, the effect seems to work; it's just not picking up the built-in "_CameraSortingLayerTexture".

    Here is my setup.... sorry for all the images. Any ideas?

    Shader Graph:
    ShaderGraph.PNG

    2D Renderer
    2DRenderer.PNG

    WaterSprite (Rectangle)
    WaterSprite.PNG

    InWaterSprite (Red Ball)
    InWaterSprite.PNG

    Camera
    Camera.PNG
     