Official New BatchRendererGroup API for 2022.1

Discussion in 'Graphics for ECS' started by joelv, Jan 26, 2022.

  1. VincentBreysse

    VincentBreysse

    Unity Technologies

    Joined:
    May 31, 2021
    Posts:
    27
    Another important general note: there are many known issues that can currently cause BRG performance to be much worse than GameObject performance.

    For example, we recently discovered that shadow caster culling for point lights is simply not working at all when rendering with a BRG, at least for HDRP. The culling planes we receive in the culling callbacks are set up in a way that causes the shadow caster culling to always pass, so you effectively end up rendering all the BRG objects in the scene to the shadow maps. From our recent discoveries, it might also affect directional shadows. There are probably other issues like this we aren't even aware of at the moment, especially around the culling code.

    We are currently working on fixing all those things. But until that's done, be wary of the perf measurements you get when using a BRG. There are many rough edges that could cause performance to drop because of various silly things.
     
  2. YuriyPopov

    YuriyPopov

    Joined:
    Sep 5, 2017
    Posts:
    237
    @VincentBreysse I will profile the shadow caster rendering on Monday. Since my case is on URP, perhaps the issue you mentioned is not affecting me. I don't see any shadow culling being done in the BRG renderer over at the gpudriven branch. How would I go about culling the shadow casters, besides checking distance in the culling job and changing the shadow casting based on that?
     
  3. Krajca

    Krajca

    Joined:
    May 6, 2014
    Posts:
    347
    So if it's in 2022.1, will the Hybrid Renderer integration land at the same time as DOTS 1.0?
     
  4. YuriyPopov

    YuriyPopov

    Joined:
    Sep 5, 2017
    Posts:
    237
    So I did more tests with RenderDoc: no shadow casting lights, no shadow rendering option on all renderers. On URP Single Pass Instanced the BRG is about 0.5 ms slower on both the CPU and GPU. So I decided to do a non-XR test just as a sanity check. In the exact same scene, the BRG gets an average of 630 fps (1.7 ms frame time) and disabling it leads to an average of 250 fps (4 ms frame time). So I would say that perhaps there are some problems with XR rendering and BRG.
     
    hippocoder likes this.
  5. VincentBreysse

    VincentBreysse

    Unity Technologies

    Joined:
    May 31, 2021
    Posts:
    27
    @YuriyPopov I think we might expect XR to be a bit slower, but not by such a big margin. We would probably need to investigate this.
    Also, did you test with the picking fix I mentioned before? That one is by far the most important: the bug causes each object to be rendered with a single draw command, so you effectively get no instancing at all.
     
  6. YuriyPopov

    YuriyPopov

    Joined:
    Sep 5, 2017
    Posts:
    237
    Indeed, I'm using the script from the master branch that contains the picking fix. I suppose this is why the non-XR case has a significant improvement when using the BRG. I also saved two RenderDoc captures for the XR version. Would you like me to upload them somewhere?
     
  7. VincentBreysse

    VincentBreysse

    Unity Technologies

    Joined:
    May 31, 2021
    Posts:
    27
    @YuriyPopov Thanks for taking the time to test all that stuff! So yeah, that sounds like we might have a performance problem with XR on our side. The best thing you could do is send us a bug report and link your captures there. It would make it easier for us to track the issue.
     
  8. YuriyPopov

    YuriyPopov

    Joined:
    Sep 5, 2017
    Posts:
    237
    1404871
    I isolated it into a simpler project. Did a quick test, results are the same. Hope it helps :)
     
  9. Paulsams

    Paulsams

    Joined:
    Feb 14, 2020
    Posts:
    6
    Hello everyone. Has anyone checked whether this will work with 2D URP? My problem is that Graphics.DrawMeshInstanced does not want to draw anything other than the standard URP Unlit shader (not Sprite Unlit), and this does not suit me at all, because I still want Sprite Lit.
     
    NotaNaN and officialfonee like this.
  10. VincentBreysse

    VincentBreysse

    Unity Technologies

    Joined:
    May 31, 2021
    Posts:
    27
    Currently you can't use any 2D URP shader pass with a BRG. Those shader passes don't handle the DOTS_INSTANCING_ON keyword which is necessary for a BRG to render anything.

    AFAIK it should be doable to get the 2D URP shader passes to work with a BRG if you are willing to put some extra effort into modifying the shaders yourself. I'm not aware of anything in the BRG API itself that would prevent you from doing 2D rendering.

    We haven't investigated this ourselves though, so as always there might be some unknown blockers. Also, 2D rendering is not our focus for the moment, so you might not receive a lot of support from our side regarding this.
     
  11. optimise

    optimise

    Joined:
    Jan 22, 2014
    Posts:
    2,129
    @joelv Just curious. Is the worse CPU performance caused by the GLES 3.0 issue or by the BRG API?
     
  12. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    It's more of a Hybrid Renderer and Unity thing. As it is now, the Hybrid Renderer is designed around a big buffer where only changed per-instance data is uploaded every frame. This data is scatter-written using a set of compute shader dispatches from some upload buffers.

    With GLES 3.1 we need to limit batch sizes, since we need to read the data from a UBO instead of an SSBO. A UBO has a size limit, so a single draw command can only access a limited amount of data. We can still use the compute shader upload path here though, and can use a big buffer that we bind at an offset for the draw calls.

    For GLES 3.0 support though, data cannot be written using a compute shader (obviously), so it would need to use a glBufferSubData path. That means we would need to maintain two different upload systems in the Hybrid Renderer. We would probably also need to spend time optimizing the GraphicsBuffer.SetData path, as that is not super efficient as it is now. Possible, but it does complicate things and we need to weigh the priorities.

    If you use the BRG raw, it will probably be entirely possible to write something like this yourself though.
     
    NotaNaN and optimise like this.
  13. Deleted User

    Deleted User

    Guest

    So does this mean there is no performance gain out of the box for normally placed objects in the scene, like this: https://mobile.twitter.com/SebAaltonen/status/1408146055850598414? So does it work like Graphics.DrawMeshInstanced and the indirect variants, where we have to write a script and spawn the mesh instances procedurally? Sorry if I am asking the same questions again... I haven't tried it out yet.
     
  14. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    This is the API that Sebastian used for that project, but it does require you to write code currently (or base it on the RenderBRG code in our tests, which is the actual code Sebastian wrote when he was tweeting that). And yes, it is like DrawMeshInstanced: a lot more complicated, but also a lot more powerful. We will be building new technology on top of this and hopefully make it more accessible in the future.
     
  15. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Thank you. For GameObject projects also? I ask because I don't really have the time or energy to do it myself, life being what it is as a small dev. Thanks for any consideration.

    Perhaps the Gigaya project is a suitable test!
     
    ontrigger, NotaNaN, mariandev and 2 others like this.
  16. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    Our team's focus is the Hybrid Renderer, so I can only speak about that. But we have written this to be a standalone, supported interface, and hope it will become usable both externally and internally.
    Regarding the Hybrid Renderer: I know it is a big dependency to pull in, but even here it may be usable for non-DOTS/Entities projects. You can create your static visual world, put it in a sub scene, and it will render using the Hybrid Renderer (and therefore the BRG) without you having to write a single line of code. That's our goal at least. Whether it is feasible is probably up to each project in the end.
     
  17. YuriyPopov

    YuriyPopov

    Joined:
    Sep 5, 2017
    Posts:
    237
    You can check out the gpudriven branch over at the Graphics repo. It has a complex BRG implementation that does work with GameObjects. It even relies on engine-side API, so you know they are working on it rather seriously.
     
    bb8_1, TerraUnity and hippocoder like this.
  18. Anon117

    Anon117

    Joined:
    Oct 19, 2012
    Posts:
    5
    I would kill for this. A voxel project I was working on died due to not being able to efficiently draw n different meshes with one single material. Even simple naive cases of ~1000 chunks end up being a major problem currently. If you could draw more, smaller meshes, you could support a 16-bit mesh index format and also do much more intelligent culling on those tiny 16x16x16 chunks, like is done in modern voxel games.
     
    JesOb likes this.
  19. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    Multi-draw is sadly not easily implementable cross-platform as it is now. Worst case, it will fall back to a CPU draw dispatch loop, with no way to specify the multi-draw count late from a compute shader. But we are working on something like this. Exactly what shape it will take and when it will ship is still not decided.
     
  20. JesOb

    JesOb

    Joined:
    Sep 3, 2012
    Posts:
    1,109
    Maybe make it only available for Vulkan, DX12 and Metal, and drop it for OpenGL entirely.

    In my opinion that is better than not supporting it at all.

    Same with mesh shaders: add support for them on capable platforms, because they have been available for many years and current-generation consoles have them, but Unity doesn't.

    Again, better to have it only on supported platforms than not have it at all.

    Just my 5 cents :)
     
    Sylmerria, NotaNaN and joelv like this.
  21. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Turned out to be more than feasible, thanks!
     
    joelv likes this.
  22. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    An update regarding GLES 3 support.

    Work has now landed and the interfaces will be available in the next alpha (2022.2.0a13). If you don't plan to use GLES 3 with the BRG in your projects you should not need to do anything.

    If you plan to support GLES 3, you will need to create the batch buffer differently for that platform, as the GLES backend will bind it as a UBO instead of an SSBO. Use BatchRendererGroup.BatchBufferTarget to check for this. You will also need to use the new overload of AddBatch which takes an offset and a size, where the offset has to be a multiple of SystemInfo.constantBufferOffsetAlignment and the size has to be less than or equal to SystemInfo.maxConstantBufferSize.

    This means that the batches can't be of arbitrary size. You will also have to write some kind of batch-splitting logic to keep the size down. Examples of this will land in the public Graphics repo once the mirroring does its magic.
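    To make the constraints concrete, here is a minimal sketch of the kind of batch-splitting logic described above. It is an illustration only, not Unity's implementation; the property and enum names (BatchRendererGroup.BufferTarget, BatchBufferTarget.ConstantBuffer) follow my reading of the 2022.2 API and may differ in your version, so treat them as assumptions.

    ```csharp
    using System;
    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.Rendering;

    // Hypothetical helper: splits a batch buffer into windows that satisfy
    // the UBO constraints described above. Splitting is only needed when the
    // buffer is bound as a constant buffer (the GLES 3.x path).
    public static class BatchSplitter
    {
        // Returns (offset, size) windows: each offset is a multiple of
        // SystemInfo.constantBufferOffsetAlignment and each size is at most
        // SystemInfo.maxConstantBufferSize.
        public static List<(int offset, int size)> Split(int totalBytes)
        {
            var windows = new List<(int, int)>();

            // On SSBO-capable backends the whole buffer can be one batch.
            if (BatchRendererGroup.BufferTarget != BatchBufferTarget.ConstantBuffer)
            {
                windows.Add((0, totalBytes));
                return windows;
            }

            int alignment = SystemInfo.constantBufferOffsetAlignment;
            int maxSize = SystemInfo.maxConstantBufferSize;

            for (int offset = 0; offset < totalBytes; )
            {
                int size = Math.Min(maxSize, totalBytes - offset);
                windows.Add((offset, size));

                // Next window starts at the first aligned offset past this one.
                offset += (size + alignment - 1) / alignment * alignment;
            }
            return windows;
        }
    }
    ```

    Each resulting window would then be registered via the AddBatch overload that takes an offset and a size.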

    Usage of this interface will land in the Hybrid Renderer shortly and will be available for the Entities 1.0 release. There the requirement will be GLES 3.1, as we are still using some compute shaders.
     
  23. graskovi

    graskovi

    Joined:
    May 28, 2016
    Posts:
    14
    I'd also be very interested in this for my project! In my case I'm using relatively simple static geometry with mesh colliders that don't move at runtime. Currently my gameplay logic uses lots of Physics.Raycast calls and other standard physics methods, so I'd like to hear what the best option is for getting the static geometry converted. I can think of the following few options:

    1. Put the static geometry into a subscene in such a way that only the renderer gets converted to an entity, and the GameObject mesh renderer is destroyed but the mesh collider sticks around.

    I'm not sure this is even possible. Ideally, the transform would be added to the created entity and kept for the mesh collider, the mesh renderer and material data would get added to the entity and be destroyed on the GameObject, and the mesh collider would stay on the GameObject. The GameObjectConversionUtility class only has a ConvertGameObjectHierarchy function, and I haven't found any information on this kind of partial entity conversion. The convert-and-inject option wouldn't work either, since keeping the GameObject mesh renderer around would defeat the whole purpose of this, and I'm pretty sure GameObjects in subscenes get completely destroyed, with no way to preserve just their mesh colliders.

    2. Put all of the static geometry into a subscene and convert all of my physics queries to use DOTS physics.

    Pros: once the setup is done this should be guaranteed to have good performance (this project is not for mobile) and my workflow for creating geometry can stay about the same, just put into a subscene instead of a scene.

    Cons: this would require converting every single Physics.Raycast call I have to a DOTS-compatible version. The only feasible way I can think of doing this without rewriting a massive amount of my codebase would be to create a static utility class, I'll call it DotsPhysicsUtilities, that takes the same arguments as Physics.Raycast and whatever other methods and overloads I'm using, then change each Physics.Raycast call to DotsPhysicsUtilities.Raycast and the various other equivalents. Not great for physics performance, but that should be fine.

    3. Author the static geometry subscenes separately from the GameObject physics colliders.

    This is not at all feasible for me, but it's the only other option I can think of.
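    The DotsPhysicsUtilities shim described in option 2 could be sketched roughly as below. This is a hedged, untested sketch assuming the pre-1.0 Unity.Physics API (BuildPhysicsWorld, RaycastInput, CollisionWorld.CastRay); the class and method names are the hypothetical ones from option 2.

    ```csharp
    using Unity.Entities;
    using Unity.Physics;
    using Unity.Physics.Systems;
    using UnityEngine;

    // Hypothetical shim: mirrors the Physics.Raycast call shape but queries
    // the DOTS physics world instead of the GameObject physics scene.
    public static class DotsPhysicsUtilities
    {
        public static bool Raycast(Vector3 origin, Vector3 direction,
                                   out Unity.Physics.RaycastHit hit,
                                   float maxDistance)
        {
            // Grab the collision world built by Unity.Physics this frame.
            var physicsWorld = World.DefaultGameObjectInjectionWorld
                .GetExistingSystem<BuildPhysicsWorld>().PhysicsWorld;

            var input = new RaycastInput
            {
                Start = origin,
                End = origin + direction.normalized * maxDistance,
                // Layer masks would need an explicit mapping; hit everything for now.
                Filter = CollisionFilter.Default,
            };
            return physicsWorld.CollisionWorld.CastRay(input, out hit);
        }
    }
    ```

    Overloads matching the other Physics.Raycast signatures (layer masks, RaycastHit arrays, etc.) would be added the same way, translating each argument into the RaycastInput/CollisionFilter equivalents.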



    What would the best option be here? I'd also be curious to hear from @hippocoder how you managed to achieve this.
     
  24. optimise

    optimise

    Joined:
    Jan 22, 2014
    Posts:
    2,129
    @joelv Awesome. Has performance and compatibility testing been started for different kinds of phones, especially problematic ones like Huawei phones? From what I know, some phones don't work really well with compute shaders.
     
  25. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    We are testing, and we are seeing a bit better performance on GLES as it uses a UBO path instead of an SSBO. That said, the way the Hybrid Renderer is written, it relies on compute shaders (a few dispatches at the beginning of the frame), persistent GPU storage of instance data, and instancing. These will continue to be slow paths on some hardware, and it will be next to impossible to get away from that without a full rewrite. There will probably be cases on some devices where GameObjects render faster than entities in an identical scene due to this.

    We are also tracking down a few bugs regarding worse performance than expected so hopefully there will be additional improvements in the future.
     
    NotaNaN and hippocoder like this.
  26. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Not that I target Quest 2 any more (it'll be long in the tooth by the time I'm done) but out of curiosity, did you test this device, and if so how did it perform? Ditto for switch. I'm using Entities nowadays.
     
  27. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    Sorry, I don't have any numbers for those specific devices. Functionality has been tested from what I can hear, and we are ramping up performance testing and fixes. They are in the device support matrix for 1.0, so work will be done on ensuring things work on them. We have quite a lot of platforms to cover for a small team though.
     
    Occuros, NotaNaN and hippocoder like this.
  28. optimise

    optimise

    Joined:
    Jan 22, 2014
    Posts:
    2,129
    If I understand correctly, what you mention is the GPU side, right? I guess the CPU side, which prepares data for the GPU to render, can still get a huge performance boost?
     
  29. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    Yes, the GPU side is still a bit so-so on certain devices. On the CPU side we should be seeing quite good numbers, but the difference from the GameObject path is not as big as it is on other platforms.
     
    optimise likes this.
  30. DDKH

    DDKH

    Joined:
    Jun 13, 2013
    Posts:
    25
    Hi! A few questions:
    1) Will the Hybrid Renderer be updated along with the 0.51 release and get some fixes/improvements? For example, it would be great if it supported reflection probes in URP (currently it does not work with lightmaps).
    2) Are there any plans to publish new Hybrid Renderer versions between 0.51 and 1.0?
     
  31. ThatDan123

    ThatDan123

    Joined:
    Feb 15, 2020
    Posts:
    11
    All the links in the original post are broken.
     
  32. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    The Hybrid Renderer will stay as it is for the 0.5x streams (and anything lower than 1.0, the way things look now). A new version with somewhat better feature support will be released for the 1.0 stream and will require Unity 2022.x.
     
    Anthiese likes this.
  33. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    Thanks for pointing it out. Something broke during the move of the graphics code. I am waiting for some processes to finish and will update the links as soon as possible (though you should be able to find everything but the RenderBRG.cs script with some repository browsing).
     
    Occuros likes this.
  34. optimise

    optimise

    Joined:
    Jan 22, 2014
    Posts:
    2,129
    Hi. Is there any plan to do a full rewrite in the future?
     
  35. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    No, a rewrite is currently not planned. Basing the data model on what works best for phones would probably make things worse overall for the rest of the platforms. We will, though, try to do mobile-specific optimizations and mobile-specific data paths where it makes sense.
     
    jiraphatK, NotaNaN and optimise like this.
  36. Deleted User

    Deleted User

    Guest

    But your first post says it's completely rewritten for the 2022.1 version? I am confused :confused:
     
  37. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    The underlying interface is completely rewritten (the parts that are in Unity, not in the package). Most of the Hybrid Renderer is still there and uses the same data model as before. What we have improved on is the ability to make many more things batch together. Note that the Hybrid Renderer version using this interface has not yet been released.

    Rewriting the data model is a much bigger task.
     
    NotaNaN and Deleted User like this.
  38. optimise

    optimise

    Joined:
    Jan 22, 2014
    Posts:
    2,129
    Regarding rewriting the data model: do you mean fully rewriting the whole graphics backend to be full DOTS graphics in the future, so that we no longer need to use the Hybrid Renderer package?
     
  39. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    No, what I mean by data model in this case is how we go from data in the components to data usable in the shaders. Matrices are a good example here. Each renderable entity has a matrix attached to it, and that matrix needs to be read by the vertex shader to place the mesh in the correct position.

    Traditional Unity GameObject code paths will copy this matrix for every instance every frame to the GPU, just before the draw call. Add to this a lot of other per-draw or global data, and the cost quickly adds up.

    The Hybrid Renderer model is that all data is persistent on the GPU. We only upload the matrix whenever it has actually changed, and we do this by packing things into a big GPU buffer. This gives us a lot of CPU time back, and for most GPUs there is no access penalty when doing things this way (and since we can instance draws automatically with this setup, we can actually get better performance in many cases). Some GPUs do have a harder time reading from an arbitrary buffer location though, and see a bit of a penalty.
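    The persistent-buffer idea can be illustrated with a toy sketch. This is not the actual Hybrid Renderer code (which scatter-writes with compute shader dispatches and tracks changes per chunk); all names here are made up for illustration.

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;

    // Toy version of the persistent data model: matrices live in one
    // structured GPU buffer, and only instances whose matrix changed
    // since the last frame are re-uploaded.
    public class PersistentMatrixBuffer
    {
        readonly GraphicsBuffer buffer;   // persistent GPU storage
        readonly Matrix4x4[] cpuCopy;     // last value sent, per instance
        readonly HashSet<int> dirty = new HashSet<int>();

        public PersistentMatrixBuffer(int instanceCount)
        {
            cpuCopy = new Matrix4x4[instanceCount];
            buffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured,
                                        instanceCount, 64); // 16 floats per matrix
        }

        public void SetMatrix(int instance, Matrix4x4 m)
        {
            if (cpuCopy[instance] == m)
                return; // unchanged: costs nothing on the GPU side
            cpuCopy[instance] = m;
            dirty.Add(instance);
        }

        // Once per frame: only dirty instances touch the GPU. (The real
        // Hybrid Renderer scatters these with compute dispatches instead
        // of many small SetData calls.)
        public void Flush()
        {
            foreach (int i in dirty)
                buffer.SetData(cpuCopy, i, i, 1);
            dirty.Clear();
        }
    }
    ```

    For a mostly static scene, Flush ends up uploading almost nothing, which is where the CPU time savings described above come from.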
     
    NotaNaN and Deleted User like this.
  40. kite3h

    kite3h

    Joined:
    Aug 27, 2012
    Posts:
    197
    Currently, many companies are doing cluster pre-culling using compute shaders.
    Not only Unreal's Nanite, but also Sony developers and Activision are using this method.
    There are various names for it, such as meshlets, GeometryFX, etc., but I want to know what advantages and disadvantages BRG has compared to that method, which does not use the CPU and instead uses GPU-driven culling and LOD.

    If HLOD supported nesting, I wouldn't worry too much about the current situation.
    However, isn't efficient culling of a big-scale world impossible at this time?
     
    Last edited: Jun 3, 2022
  41. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    We have a way forward here and want to explore GPU-driven rendering as well. The BRG will at some point (hopefully soon) get procedural and indirect draw support. We did not have time to take it all the way in this rewrite, since we were focusing on the features the Hybrid Renderer currently needs. With the coming extensions we can explore GPU-driven rendering there, and you can as well in custom renderers, much more easily and more efficiently than with the current Unity interfaces (I hope).

    Regarding HLOD, that is probably a question someone else has to answer.
     
    JoNax97, hippocoder, NotaNaN and 2 others like this.
  42. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    727
    I have done a decent amount of work on GPU-driven rendering in Unity, and what I found is that there are a lot of issues outside of just the rendering that have to be handled: computing the mesh clusters at editor time, and having a capable texture streaming setup so that you can put everything in the same texture atlases to avoid one draw call per material (or else you lose a lot of the benefit of GPU-driven rendering).

    On top of this, you have to make sure it actually runs a lot faster, as the code complexity has to be justified. There are many reasons it could be slower: vertices are already pretty cheap on modern hardware, you have to rebind and re-cull meshes so you don't get leaks from previous-frame depth culling, you have to use an uber shader, you get slow memory access patterns because of large unsorted buffers, etc. Dealing with shaders is also rough, because you either need an uber shader or some way to partition the screen to only render certain parts depending on the shaders used there, which further complicates things. Some of these things you can set up just fine on your own, like the uber shader, but for some you have to use hacks that hurt performance, like with texture streaming, because Unity just doesn't currently have the APIs to do this nicely.

    IMO GPU-driven rendering isn't the end-all solution it is often touted to be, and even complex scenes may still be more performant using a more traditional rendering pipeline.

    However, I would be extremely interested (as would a lot of people, I think) to see a Unity-official GPU-driven render pipeline.
     
  43. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    The links have now been fixed and point to the last place before some restructuring happened. I will try to update the links again once the GLES 3 work is fully mirrored as well, but using that code will require the 2022.2 alpha anyway.
     
    Anthiese likes this.
  44. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    I use Entities / the Hybrid Renderer. Would the above-mentioned indirect support be something I need to worry about when it comes, or are we looking at, woo hoo, even more free speed boosts? Because I'm down for that!

    A real question though: you say that Hybrid/Entities stores the matrices for rendering, but is it as efficient as a regular compute shader, provided their positions aren't changing?

    Basically, what's the best thing I can do today with DOTS/Entities to ensure optimal performance for things that move infrequently?
     
    NotaNaN likes this.
  45. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    The interface improvements are for the BRG. We can potentially then use them in the Hybrid Renderer if we find a way to do that. However, I think they will be more useful for custom rendering.

    The Hybrid Renderer stores two 3x4 float matrices per instance (or four if you need motion vectors, IIRC). In the static case we do no work except loading them in the shader. Sure, there are probably more compact ways of storing things, but this is what we need to support the general case. And you would have to run on a _very_ bandwidth-constrained platform for the savings to justify going for something more compact (which would also discard a lot of flexibility).

    A bit OT: to get maximum performance out of the Hybrid Renderer, you would want to have your static entities separated from your dynamic entities in the chunks (you can do this with a manual static/dynamic tag component). This way you will minimize transfers to the GPU when dynamic entities move.
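    The static/dynamic tag component mentioned here can be as simple as an empty IComponentData; the tag name below is made up for illustration.

    ```csharp
    using Unity.Entities;

    // Hypothetical tag: an empty component whose only job is to change the
    // archetype of static entities so they land in their own chunks.
    public struct StaticRenderTag : IComponentData { }

    // Usage at spawn/conversion time, e.g.:
    //   entityManager.AddComponent<StaticRenderTag>(staticEntity);
    //
    // Chunks only hold entities with identical archetypes, so tagged and
    // untagged entities never share a chunk, and change filtering can then
    // skip entire static chunks when nothing in them has moved.
    ```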
     
  46. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Great info, thank you. So if I understood correctly, I can basically get ballpark DrawInstancedIndirect performance in Hybrid/Entities if things are marked static!
     
  47. JussiKnuuttila

    JussiKnuuttila

    Unity Technologies

    Joined:
    Jun 7, 2019
    Posts:
    351
    In general, DrawInstancedIndirect is not really related to performance directly (with possible exceptions on some very specific platforms); it's more about enabling the use of the GPU for things like culling, which can then bring performance improvements.

    Tagging entities as static is not related to DrawInstancedIndirect either. Instead, what the tagging does is ensure that static entities will not be placed in the same chunks as non-static entities, and this makes it possible for the Hybrid Renderer to skip those chunks very efficiently when their transforms don't change. The Hybrid Renderer keeps the transforms on the GPU and only needs to update them when a change is detected.
     
  48. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Even more rich information! I always wondered why my DrawInstancedIndirect stuff scaled badly. I guess I thought saving some time on transforms/GameObjects was basically it, and of course that would not scale well outside of Entities. This makes things a lot clearer. I also found it strange that my CPU time rose back when I was doing DrawMeshInstancedIndirect and didn't really know why. I must have been submitting a lot of data each frame, even when unchanged.

    On another note, how recent is the static information? I read from staff elsewhere that flagging things as static in Entities/Hybrid was going to be less important going forward. Perhaps they merely meant that there were improvements for the non-static stuff.
     
  49. Paulsams

    Paulsams

    Joined:
    Feb 14, 2020
    Posts:
    6
    I will ask a question a little off topic, but I created a thread ( https://forum.unity.com/threads/sprites-are-not-drawn-in-urp-2d-via-graphics.1287935/ ) and couldn't get any response. Can you at least tell me where I can find out how you draw sprites in the Hybrid Renderer? Or maybe you can suggest what alternative I should use to push the rendering into the SRP pass, other than Graphics? Or can you tell me how SpriteRenderer does it itself? Because, as I described in the thread above, Graphics.DrawMesh turns out to be much more expensive even with the SRP Batcher, and DrawInstanced does not draw anything at all except URP/Unlit. I tried to dig into the Hybrid Renderer package, but I couldn't get to the code I was interested in. And even if I can add the DOTS_INSTANCING_ON keyword to my shader, I will limit myself greatly, because I will have to write shaders only through code, and I will also force users of my package to write them that way if they need a specific shader.
     
    Last edited: Jun 13, 2022
  50. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    A bit tricky, yes. The Hybrid Renderer will soon be using the API described in this forum post, and that will only be available in Unity 2022.1 and up. It will require specific shaders, since we have a data model that allows you to do per-instance overrides in a very flexible and performant way. Not all of the built-in Unity shaders have been converted to support this setup yet, but we are slowly getting there. Your best bet is to use a Shader Graph based on your SRP of choice to get compatibility.

    The sprite renderer uses interfaces not available in C#, as it ties heavily into the lower-level parts of the engine to be efficient. But I suspect it could be written using the BatchRendererGroup instead, now that it is available. It is quite an involved task though.

    If you do want to render bullet-hell sprites, I would suggest you look at the BRG Raw example scene, which shows you how to procedurally draw a lot of instanced things. I hope this helps at least a little bit.
     
    Anthiese, NotaNaN and hippocoder like this.