
Graphics.DrawMeshInstanced

Discussion in 'Graphics for ECS' started by Arathorn_J, Jun 26, 2018.

  1. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    For my particular game, I need as much CPU performance as I can get because I have a lot of other things that are CPU bound. Your idea of deleting the extra entities might work. A few days ago I tried running the master branch, but once I updated the packages, there were a bunch of compiler errors from all the obsolete jobs code. This morning I tried the 2020.1 branch... it appears that whoever made that branch had HRV2 enabled, but I wasn't able to get the GPU-baked characters to display. The runtime error was "TargetParameterCountException: Number of parameters specified does not match the expected number." and it refers to this function:

    Code (CSharp):
    var method = typeof(RenderMesh).Assembly.GetType("Unity.Rendering.MeshRendererConversion", true).GetMethod("Convert", BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.Public);
    method.Invoke(null, new object[] { entity, manager, system, renderer, bakedData.NewMesh, materials });
    I'm not totally sure of all the details, but it looks like they're pulling an undocumented internal convert function out of the assembly, and that function's signature has changed over the years. I'm hesitant to put too much time into something so hacky. There are a couple of other repos I'm playing with right now, and if I can't get one of them to do what I need, I'll circle back to the joeante one.
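
    If anyone else hits that exception, a quick way to see what the installed package version actually expects is to dump the internal method's parameter list before invoking it. A diagnostic sketch (the class name here is made up):

    Code (CSharp):
    using System.Linq;
    using System.Reflection;
    using Unity.Rendering;
    using UnityEngine;

    // Hypothetical diagnostic: print the actual signature of the internal
    // Convert method, so the object[] passed to Invoke can be matched to
    // whatever version of the hybrid renderer package is installed.
    public static class ConvertSignatureDump
    {
        public static void Dump()
        {
            var type = typeof(RenderMesh).Assembly.GetType("Unity.Rendering.MeshRendererConversion", true);
            var method = type.GetMethod("Convert", BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.Public);
            var args = string.Join(", ", method.GetParameters().Select(p => p.ParameterType.Name + " " + p.Name));
            Debug.Log("MeshRendererConversion.Convert(" + args + ")");
        }
    }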
     
  2. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    Even though editing shaders is a piece of cake to a lot of the guys here, I was quite proud of my first shader edit ;). The only catch is that it can't do more than one animation per material... but it would be great for trees blowing in the wind or something.
     
  3. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    3,906
    It changed this year when they added lightmap support in 2020.2. But yes, it is unnecessarily hacky and I completely agree with your newfound reasons to avoid it. Thanks for trying it anyways! It gives me more motivation to take a stab at creating a clean solution.
     
  4. elJoel

    elJoel

    Joined:
    Sep 7, 2016
    Posts:
    125
    Thanks for uploading the branch, awesome. Got my archers working. There was quite a bit of stuff I had to do:
    • My archer was using a separate bow mesh, so I had to attach it to the skinned mesh using Blender (select the mesh, SHIFT+select the armature, CTRL+P => “Automatic Weights”, then fix the weights in Weight Paint mode).
    • My archers used more than one material, so I used MeshBaker to bake those materials into just one.
    • Using this skinned mesh, I tried to bake with the Animation Clip Texture Baker, but this failed.
    • What finally worked was setting the Root Bone in the Skinned Mesh Renderer (MeshBaker had removed it) and adding an Animator with a simple controller and the correct avatar.
    • Then, using the Animator Texture Baker, I was able to create a working baked animation material.
    To trigger the animation I simply set the "delta time" property defined on the material. I call this from OnUpdate in a SystemBase.
    Here's the code if anyone's interested:

    Code (CSharp):
    public class ArcherEasyLaunchSystem : SystemBase
    {
        private EntityCommandBufferSystem _ecb;
        private double _timeToWait = 5;
        private double _waitTime = 0;
        private double _triggerTime = 0;

        protected override void OnCreate()
        {
            _ecb = World.GetOrCreateSystem<EndSimulationEntityCommandBufferSystem>();
        }

        protected override void OnUpdate()
        {
            var elapsedTime = Time.ElapsedTime;
            var commandBuffer = _ecb.CreateCommandBuffer().AsParallelWriter();
            var randomArray = World.GetExistingSystem<RandomSystem>().RandomArray;
            if (elapsedTime > _waitTime)
            {
                _waitTime = elapsedTime + _timeToWait;
                var material = ArcherSettings.Instance.Material;
                material.SetFloat("_DT", (float)elapsedTime);
                _triggerTime = elapsedTime + 0.3d;
            }

            var triggerTime = _triggerTime;

            Entities.WithAny<MovementDestinationReached, ActionTriggered>().ForEach(
                (int entityInQueryIndex, int nativeThreadIndex, ref ArcherEasy archerLaunch, in Translation translation) =>
                {
                    if (elapsedTime > archerLaunch.WaitTime)
                    {
                        archerLaunch.WaitTime = triggerTime + archerLaunch.TimeToWait;
                        var instance = commandBuffer.Instantiate(entityInQueryIndex, archerLaunch.Arrow);
                        var goalTranslation = new float3(translation.Value.xyz) + archerLaunch.ThrowFrom.Value;
                        commandBuffer.SetComponent(entityInQueryIndex, instance, new Translation
                        {
                            Value = goalTranslation
                        });
                        var random = randomArray[nativeThreadIndex];
                        commandBuffer.SetComponent(entityInQueryIndex, instance, new PhysicsVelocity
                        {
                            Linear = archerLaunch.ThrowForce.Value * random.NextFloat(0.7f, 1) +
                                     random.NextFloat(-0.1f, 0.1f)
                        });
                        randomArray[nativeThreadIndex] = random;
                    }
                })
                .WithNativeDisableContainerSafetyRestriction(randomArray).ScheduleParallel();
            _ecb.AddJobHandleForProducer(Dependency);
        }
    }
    Code (CSharp):
    public class ArcherSettings : MonoBehaviour
    {
        public static ArcherSettings Instance;
        public Material Material;

        void Awake()
        {
            Instance = this;
        }
    }

    I'm also wondering how one would efficiently change the animation to a different baked animation for a given Entity.

    Also, the URP shader is unlit; it would be very nice to have it lit (a lit shader exists for the built-in pipeline). If anyone takes the time to create the lit shader variant, please tell.
     
    Last edited: Dec 17, 2020
    lclemens likes this.
  5. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    Thanks for that code! I like the idea of using delta time to trigger. Yeah, that root bone thing got me once too.

    I agree - lit would be better. I would consider tackling it, but Animation Texture Baker (ATB) has a bigger problem that I am trying to tackle first. That problem is exactly what you just mentioned - how do we efficiently change between different animations for an entity? ATB outputs only one material+texture per animation clip. Arathorn_J mentioned that his texture baker is more advanced - it can bake all the animations into a single texture. From my understanding, he plays an individual animation by telling his shader to play only a certain section of that baked texture. I think that's how most of the other texture-baking systems work as well.

    So to get ATB to be able to play more than one animation I'm debating between two different approaches:

    ==== METHOD 1 ====
    Take Arathorn_J's approach of combining all animations into a single texture. This would mean modifying the baking script so that it appends all the data into one texture - I don't think that would be difficult. The next step would be modifying the shader so that it accepts parameters specifying which subsection of the texture to play and whether or not to loop (a rough sketch of that bookkeeping follows below). That part will be rather difficult (for me anyway, since I don't know how to write HLSL). The ATB project does have a shader graph version, but I wasn't able to get it working. I haven't really used shader graphs before.
    ==================
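
    For concreteness, the clip bookkeeping for Method 1 might look something like this sketch. It assumes frames are stacked vertically in one texture; "_BeginV", "_EndV", and "_Loop" are assumed shader properties, not anything ATB ships with:

    Code (CSharp):
    using UnityEngine;

    // Hypothetical clip table for Method 1: all clips baked into one texture,
    // stacked vertically, so a clip is just a begin/end row plus a loop flag.
    public struct BakedClip
    {
        public int BeginFrame; // first row of this clip in the baked texture
        public int EndFrame;   // last row (inclusive)
        public bool Loop;
    }

    public static class ClipPlayback
    {
        // Tell the shader which vertical slice of the texture to play.
        public static void Play(Material mat, BakedClip clip, int textureHeight)
        {
            mat.SetFloat("_BeginV", (clip.BeginFrame + 0.5f) / textureHeight);
            mat.SetFloat("_EndV", (clip.EndFrame + 0.5f) / textureHeight);
            mat.SetFloat("_Loop", clip.Loop ? 1f : 0f);
        }
    }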

    ==== METHOD 2 ====
    Use Animation Texture Baker (ATB) as-is, with one material per animation, and then swap out materials at runtime to change the animation. I suspect it would have to be done on the main thread. So maybe something like this:
    Code (CSharp):
    var renderMesh = entityManager.GetSharedComponentData<RenderMesh>(selectedEntity);
    renderMesh.material = wizardAttackMaterial;
    entityManager.SetSharedComponentData(selectedEntity, renderMesh);
    I'm not 100% sure, but I think the material is just a reference, so it should be fast even on the main thread. One downside is that it could use more memory than the other approach and would require keeping track of more files. One advantage is that it probably wouldn't be too difficult to implement. It would be sorta weird though - for an attack it would switch the material from loop-walk to the attack material for just the right number of seconds, and then back to idle or run or whatever.
    ==================

    There is one thing I'm concerned about, though - materials are shared. So if the first character is halfway through an Attack and then a second character starts an Attack, will the second one begin halfway through? I know HRV2 supports per-instance properties... but that begs the question: what happens to memory and the GPU if all 100,000 entities have unique per-instance properties? That seems like it might defeat the whole point of draw-mesh-instanced. Does Method 1 run into this problem too?
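
    (For reference, a per-instance property in HRV2 boils down to a small component like the sketch below - the "_AnimStartTime" property name is hypothetical. Since it stores only one float per instance alongside the shared material, instancing itself should be preserved.)

    Code (CSharp):
    using Unity.Entities;
    using Unity.Rendering;

    // Hypothetical per-instance property: each entity gets its own animation
    // start time while still sharing one material, so instancing is preserved.
    [MaterialProperty("_AnimStartTime", MaterialPropertyFormat.Float)]
    public struct AnimStartTime : IComponentData
    {
        public float Value;
    }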

    Has anyone here purchased "Mesh Animator" or "GPU Animation Baker" from the asset store and attempted to convert it to be used from ECS? I asked the author of Mesh Animator if it could be used with ECS and he said something to the effect of "not without a lot of work".
     
  6. jdtec

    jdtec

    Joined:
    Oct 25, 2017
    Posts:
    296
    What I like about this repo as a whole is that the core concept - baking the texture and using that data in the shader - is very straightforward. Most people will want to customise/build on this technique anyway, so having a simple core is important.

    I managed to extend it to multiple animations with a transition blend value in a lit URP shader fairly easily. I'm doing that with an extra animation position + normal texture for now, just to see it work, but I can change it later so multiple animations are baked into a single texture; that should be easy enough.

    I'm still not 100% sure I understand my cbuffers versus my instancing macros in the shader, and I'm puzzled why you can mix the two in the unlit shader but I didn't manage to do the same in my lit shader.

    I was coming from another GPU texture-baking sample where the actual baking code was a mess and ~10x the size of this one. I had previously thought of abandoning it, but the sunk cost fallacy was in full force :)
     
    Last edited: Dec 17, 2020
  7. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    Wow, that's so cool! Any chance you could share that multi-animation URP lit shader and the multi-animation baking code with elJoel and me? I feel like we're so close to getting a usable project, and you have the last two pieces!
     
  8. jdtec

    jdtec

    Joined:
    Oct 25, 2017
    Posts:
    296
    Yeah, sure. Disclaimer: this is almost certainly not the most efficient way, and it's a bit messy as I've literally just taken another URP lit shader and jammed the texture animation code in. E.g. you don't have to use a texture per animation; as you say above, you could bake all the animations into a single texture.

    I just wanted to see a proof of concept of setting a DOTS material property and watching it alter the characters between running and standing.

    I'd consider it a learning exercise or stepping stone for how you could approach modifying the shaders, rather than anything you'd definitely want to use long-term.

    Regarding the baking stuff, the original code should already output a normal and tangent texture. You just need to set the normal texture for each animation in the shader below.

    Code (CSharp):
    Shader "Universal Render Pipeline/TextureAnimPlayer_Lit_Diff_GpuInstance"
    {
        Properties {
            [MainColor] _BaseColor("Color", Color) = (1.0, 1.0, 1.0, 1)
            [MainTexture] _BaseMap("Albedo", 2D) = "white" {}
            _Smoothness ("Smoothness", Float) = 0.5

            [Toggle(_ALPHATEST_ON)] _EnableAlphaTest("Enable Alpha Cutoff", Float) = 0.0
            _Cutoff ("Alpha Cutoff", Float) = 0.5

            [Toggle(_NORMALMAP)] _EnableBumpMap("Enable Normal/Bump Map", Float) = 0.0
            _BumpMap ("Normal/Bump Texture", 2D) = "bump" {}
            _BumpScale ("Bump Scale", Float) = 1

            [Toggle(_EMISSION)] _EnableEmission("Enable Emission", Float) = 0.0
            _EmissionMap ("Emission Texture", 2D) = "white" {}
            _EmissionColor ("Emission Colour", Color) = (0, 0, 0, 0)

            // Animation texture properties
            _PosTex("position texture", 2D) = "black"{}
            _NmlTex("normal texture", 2D) = "white"{}
            _PosTex2("position texture 2", 2D) = "black"{}
            _NmlTex2("normal texture 2", 2D) = "white"{}
            // _TgtTex("tangent texture", 2D) = "white"{}
            _DT("delta time", float) = 0
            _Length("animation length", Float) = 1
            _AnimationTransition("animation transition", Float) = 1
            [Toggle(ANIM_LOOP)] _Loop("loop", Float) = 1
        }
        SubShader {
            Tags { "RenderType"="Opaque" "RenderPipeline"="UniversalPipeline" }

            HLSLINCLUDE
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

            #pragma target 4.5

            CBUFFER_START(UnityPerMaterial)
            float4 _BaseMap_ST;
            float4 _BaseColor;
            float _BumpScale;
            float4 _EmissionColor;
            float _Smoothness;
            float _Cutoff;

            float4 _PosTex_TexelSize;
            float4 _NmlTex_TexelSize;
            float4 _PosTex2_TexelSize;
            float4 _NmlTex2_TexelSize;
            // float4 _TgtTex_TexelSize;
            float _Length;
            float _AnimationTransition;
            float _DT;
            CBUFFER_END

            #if defined(UNITY_DOTS_INSTANCING_ENABLED)
            // DOTS instancing definitions
            UNITY_DOTS_INSTANCING_START(MaterialPropertyMetadata)
                UNITY_DOTS_INSTANCED_PROP(float, _AnimationTransition)
            UNITY_DOTS_INSTANCING_END(MaterialPropertyMetadata)
            // DOTS instancing usage macros
            #define _AnimationTransition UNITY_ACCESS_DOTS_INSTANCED_PROP_FROM_MACRO(float, Metadata__AnimationTransition)
            #endif

            sampler2D _PosTex;
            sampler2D _NmlTex;
            sampler2D _PosTex2;
            sampler2D _NmlTex2;
            // sampler2D _TgtTex;
            ENDHLSL

            Pass {
                Name "Example"
                Tags { "LightMode"="UniversalForward" }

                HLSLPROGRAM

                // Required to compile gles 2.0 with standard SRP library
                // All shaders must be compiled with HLSLcc and currently only gles is not using HLSLcc by default
                #pragma prefer_hlslcc gles
                #pragma exclude_renderers d3d11_9x gles

                #pragma target 4.5

                #pragma vertex vert
                #pragma fragment frag

                // Material Keywords
                #pragma shader_feature _NORMALMAP
                #pragma shader_feature _ALPHATEST_ON
                #pragma shader_feature _ALPHAPREMULTIPLY_ON
                #pragma shader_feature _EMISSION
                //#pragma shader_feature _METALLICSPECGLOSSMAP
                //#pragma shader_feature _SMOOTHNESS_TEXTURE_ALBEDO_CHANNEL_A
                //#pragma shader_feature _OCCLUSIONMAP
                //#pragma shader_feature _ _CLEARCOAT _CLEARCOATMAP // URP v10+

                //#pragma shader_feature _SPECULARHIGHLIGHTS_OFF
                //#pragma shader_feature _ENVIRONMENTREFLECTIONS_OFF
                //#pragma shader_feature _SPECULAR_SETUP
                #pragma shader_feature _RECEIVE_SHADOWS_OFF

                // URP Keywords
                #pragma multi_compile _ _MAIN_LIGHT_SHADOWS
                #pragma multi_compile _ _MAIN_LIGHT_SHADOWS_CASCADE
                #pragma multi_compile _ _ADDITIONAL_LIGHTS_VERTEX _ADDITIONAL_LIGHTS
                #pragma multi_compile _ _ADDITIONAL_LIGHT_SHADOWS
                #pragma multi_compile _ _SHADOWS_SOFT
                #pragma multi_compile _ _MIXED_LIGHTING_SUBTRACTIVE

                // Unity defined keywords
                #pragma multi_compile _ DIRLIGHTMAP_COMBINED
                #pragma multi_compile _ LIGHTMAP_ON
                #pragma multi_compile_fog

                #pragma multi_compile_instancing
                #pragma multi_compile _ DOTS_INSTANCING_ON

                #pragma multi_compile ___ ANIM_LOOP

                // Includes
                #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"
                #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/SurfaceInput.hlsl"

                struct Attributes {
                    float4 positionOS   : POSITION;
                    float3 normalOS     : NORMAL;
                    float4 tangentOS    : TANGENT;
                    float4 color        : COLOR;
                    float2 uv           : TEXCOORD0;
                    float2 lightmapUV   : TEXCOORD1;
                    UNITY_VERTEX_INPUT_INSTANCE_ID
                };

                struct Varyings {
                    float4 positionCS : SV_POSITION;
                    float4 color      : COLOR;
                    float2 uv         : TEXCOORD0;
                    DECLARE_LIGHTMAP_OR_SH(lightmapUV, vertexSH, 1);

                    #ifdef REQUIRES_WORLD_SPACE_POS_INTERPOLATOR
                        float3 positionWS : TEXCOORD2;
                    #endif

                    float3 normalWS : TEXCOORD3;
                    #ifdef _NORMALMAP
                        float4 tangentWS : TEXCOORD4;
                    #endif

                    float3 viewDirWS : TEXCOORD5;
                    half4 fogFactorAndVertexLight : TEXCOORD6; // x: fogFactor, yzw: vertex light

                    #ifdef REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR
                        float4 shadowCoord : TEXCOORD7;
                    #endif
                };

                #if SHADER_LIBRARY_VERSION_MAJOR < 9
                // This function was added in URP v9.x.x versions, if we want to support URP versions before, we need to handle it instead.
                // Computes the world space view direction (pointing towards the viewer).
                float3 GetWorldSpaceViewDir(float3 positionWS) {
                    if (unity_OrthoParams.w == 0) {
                        // Perspective
                        return _WorldSpaceCameraPos - positionWS;
                    } else {
                        // Orthographic
                        float4x4 viewMat = GetWorldToViewMatrix();
                        return viewMat[2].xyz;
                    }
                }
                #endif

                Varyings vert (Attributes IN, uint vid : SV_VertexID)
                {
                    Varyings OUT;
                    UNITY_SETUP_INSTANCE_ID (IN);

                    // _Time = Time since level load (t/20, t, t*2, t*3), use to animate things inside the shaders.
                    // float animationTransition = UNITY_ACCESS_INSTANCED_PROP(Props, _AnimationTransition);
                    float animationTransition = _AnimationTransition;
                    float t = (_Time.y - UNITY_ACCESS_INSTANCED_PROP(Props, _DT)) / UNITY_ACCESS_INSTANCED_PROP(Props, _Length);
                    #if ANIM_LOOP
                    t = fmod(t, 1.0);
                    #else
                    t = saturate(t);
                    #endif
                    float x = (vid + 0.5) * UNITY_ACCESS_INSTANCED_PROP(Props, _PosTex_TexelSize.x);
                    float y = t;
                    float4 pos = tex2Dlod(_PosTex, float4(x, y, 0, 0));
                    float3 normal = tex2Dlod(_NmlTex, float4(x, y, 0, 0));

                    float4 pos2 = tex2Dlod(_PosTex2, float4(x, y, 0, 0));
                    float3 normal2 = tex2Dlod(_NmlTex2, float4(x, y, 0, 0));

                    IN.positionOS = lerp (pos, pos2, animationTransition);
                    IN.normalOS = lerp (normal, normal2, animationTransition);

                    VertexPositionInputs positionInputs = GetVertexPositionInputs(IN.positionOS.xyz);
                    OUT.positionCS = positionInputs.positionCS;
                    OUT.uv = TRANSFORM_TEX(IN.uv, _BaseMap);
                    OUT.color = IN.color;

                    #ifdef REQUIRES_WORLD_SPACE_POS_INTERPOLATOR
                        OUT.positionWS = positionInputs.positionWS;
                    #endif

                    OUT.viewDirWS = GetWorldSpaceViewDir(positionInputs.positionWS);

                    VertexNormalInputs normalInputs = GetVertexNormalInputs(IN.normalOS, IN.tangentOS);
                    OUT.normalWS = normalInputs.normalWS;
                    #ifdef _NORMALMAP
                        real sign = IN.tangentOS.w * GetOddNegativeScale();
                        OUT.tangentWS = half4(normalInputs.tangentWS.xyz, sign);
                    #endif

                    half3 vertexLight = VertexLighting(positionInputs.positionWS, normalInputs.normalWS);
                    half fogFactor = ComputeFogFactor(positionInputs.positionCS.z);

                    OUT.fogFactorAndVertexLight = half4(fogFactor, vertexLight);

                    OUTPUT_LIGHTMAP_UV(IN.lightmapUV, unity_LightmapST, OUT.lightmapUV);
                    OUTPUT_SH(OUT.normalWS.xyz, OUT.vertexSH);

                    #ifdef REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR
                        OUT.shadowCoord = GetShadowCoord(positionInputs);
                    #endif

                    return OUT;
                }

                InputData InitializeInputData(Varyings IN, half3 normalTS){
                    InputData inputData = (InputData)0;

                    #if defined(REQUIRES_WORLD_SPACE_POS_INTERPOLATOR)
                        inputData.positionWS = IN.positionWS;
                    #endif

                    half3 viewDirWS = SafeNormalize(IN.viewDirWS);
                    #ifdef _NORMALMAP
                        float sgn = IN.tangentWS.w; // should be either +1 or -1
                        float3 bitangent = sgn * cross(IN.normalWS.xyz, IN.tangentWS.xyz);
                        inputData.normalWS = TransformTangentToWorld(normalTS, half3x3(IN.tangentWS.xyz, bitangent.xyz, IN.normalWS.xyz));
                    #else
                        inputData.normalWS = IN.normalWS;
                    #endif

                    inputData.normalWS = NormalizeNormalPerPixel(inputData.normalWS);
                    inputData.viewDirectionWS = viewDirWS;

                    #if defined(REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR)
                        inputData.shadowCoord = IN.shadowCoord;
                    #elif defined(MAIN_LIGHT_CALCULATE_SHADOWS)
                        inputData.shadowCoord = TransformWorldToShadowCoord(inputData.positionWS);
                    #else
                        inputData.shadowCoord = float4(0, 0, 0, 0);
                    #endif

                    inputData.fogCoord = IN.fogFactorAndVertexLight.x;
                    inputData.vertexLighting = IN.fogFactorAndVertexLight.yzw;
                    inputData.bakedGI = SAMPLE_GI(IN.lightmapUV, IN.vertexSH, inputData.normalWS);
                    return inputData;
                }

                SurfaceData InitializeSurfaceData(Varyings IN){
                    SurfaceData surfaceData = (SurfaceData)0;
                    // Note, we can just use SurfaceData surfaceData; here and not set it.
                    // However we then need to ensure all values in the struct are set before returning.
                    // By casting 0 to SurfaceData, we automatically set all the contents to 0.

                    half4 albedoAlpha = SampleAlbedoAlpha(IN.uv, TEXTURE2D_ARGS(_BaseMap, sampler_BaseMap));
                    surfaceData.alpha = Alpha(albedoAlpha.a, _BaseColor, _Cutoff);
                    surfaceData.albedo = albedoAlpha.rgb * _BaseColor.rgb * IN.color.rgb;

                    // For the sake of simplicity I'm not supporting the metallic/specular map or occlusion map
                    // for an example of that see : https://github.com/Unity-Technologies/Graphics/blob/master/com.unity.render-pipelines.universal/Shaders/LitInput.hlsl

                    surfaceData.smoothness = 0.5;
                    surfaceData.normalTS = SampleNormal(IN.uv, TEXTURE2D_ARGS(_BumpMap, sampler_BumpMap), _BumpScale);
                    surfaceData.emission = SampleEmission(IN.uv, _EmissionColor.rgb, TEXTURE2D_ARGS(_EmissionMap, sampler_EmissionMap));

                    surfaceData.occlusion = 1;

                    return surfaceData;
                }

                half4 frag(Varyings IN) : SV_Target {
                    SurfaceData surfaceData = InitializeSurfaceData(IN);
                    InputData inputData = InitializeInputData(IN, surfaceData.normalTS);

                    // In URP v10+ versions we could use this :
                    // half4 color = UniversalFragmentPBR(inputData, surfaceData);

                    // But for other versions, we need to use this instead.
                    // We could also avoid using the SurfaceData struct completely, but it helps to organise things.
                    half4 color = UniversalFragmentPBR(inputData, surfaceData.albedo, surfaceData.metallic,
                                                surfaceData.specular, surfaceData.smoothness, surfaceData.occlusion,
                                                surfaceData.emission, surfaceData.alpha);

                    color.rgb = MixFog(color.rgb, inputData.fogCoord);

                    // color.a = OutputAlpha(color.a);
                    // Not sure if this is important really. It's implemented as :
                    // saturate(outputAlpha + _DrawObjectPassData.a);
                    // Where _DrawObjectPassData.a is 1 for opaque objects and 0 for alpha blended.
                    // But it was added in URP v8, and versions before just didn't have it.
                    // We could still saturate the alpha to ensure it doesn't go outside the 0-1 range though :
                    color.a = saturate(color.a);

                    return color; // float4(inputData.bakedGI,1);
                }
                ENDHLSL
            }

            // UsePass "Universal Render Pipeline/Lit/ShadowCaster"
            // Note, you can do this, but it will break batching with the SRP Batcher currently due to the CBUFFERs not being the same.
            // So instead, we'll define the pass manually :
            Pass {
                Name "ShadowCaster"
                Tags { "LightMode"="ShadowCaster" }

                ZWrite On
                ZTest LEqual

                HLSLPROGRAM
                // Required to compile gles 2.0 with standard srp library
                #pragma prefer_hlslcc gles
                #pragma exclude_renderers d3d11_9x gles
                #pragma target 4.5

                // Material Keywords
                #pragma shader_feature _ALPHATEST_ON
                #pragma shader_feature _SMOOTHNESS_TEXTURE_ALBEDO_CHANNEL_A

                // GPU Instancing
                #pragma multi_compile_instancing
                #pragma multi_compile _ DOTS_INSTANCING_ON

                #pragma vertex ShadowPassVertex
                #pragma fragment ShadowPassFragment

                #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/CommonMaterial.hlsl"
                #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/SurfaceInput.hlsl"
                #include "Packages/com.unity.render-pipelines.universal/Shaders/ShadowCasterPass.hlsl"

                // Note if we want to do any vertex displacement, we'll need to change the vertex function :
                // e.g.
                // #pragma vertex vert

                Varyings vert (Attributes IN, uint vid : SV_VertexID)
                {
                    Varyings output;
                    UNITY_SETUP_INSTANCE_ID (IN);

                    // _Time = Time since level load (t/20, t, t*2, t*3), use to animate things inside the shaders.
                    // float t = (_Time.y - UNITY_ACCESS_INSTANCED_PROP(Props, _DT)) / UNITY_ACCESS_INSTANCED_PROP(Props, _Length);
                    float t = (_Time.y - _DT) / _Length;
                    #if ANIM_LOOP
                    t = fmod(t, 1.0);
                    #else
                    t = saturate(t);
                    #endif
                    // float x = (vid + 0.5) * UNITY_ACCESS_INSTANCED_PROP(Props, _PosTex_TexelSize.x);
                    float x = (vid + 0.5) * _PosTex_TexelSize.x;
                    float y = t;
                    float4 pos = tex2Dlod(_PosTex, float4(x, y, 0, 0));
                    float4 pos2 = tex2Dlod(_PosTex2, float4(x, y, 0, 0));
                    IN.positionOS = lerp (pos, pos2, _AnimationTransition);

                    output.uv = TRANSFORM_TEX (IN.texcoord, _BaseMap);
                    output.positionCS = GetShadowPositionHClip (IN);
                    return output;
                }

                // Using the ShadowCasterPass means we also need _BaseMap, _BaseColor and _Cutoff shader properties.
                // Also including them in cbuffer, with the exception of _BaseMap as it's a texture.

                ENDHLSL
            }

            // Similarly, we should have a DepthOnly pass.
            // UsePass "Universal Render Pipeline/Lit/DepthOnly"
            // Again, since the cbuffer is different it'll break batching with the SRP Batcher.

            // The DepthOnly pass is very similar to the ShadowCaster but doesn't include the shadow bias offsets.
            // I believe Unity uses this pass when rendering the depth of objects in the Scene View.
            // But for the Game View / actual camera Depth Texture it renders fine without it.
            // It's possible that it could be used in Forward Renderer features though, so we should probably still include it.
            Pass {
                Name "DepthOnly"
                Tags { "LightMode"="DepthOnly" }

                ZWrite On
                ColorMask 0

                HLSLPROGRAM
                // Required to compile gles 2.0 with standard srp library
                #pragma prefer_hlslcc gles
                #pragma exclude_renderers d3d11_9x gles
                #pragma target 4.5

                // Material Keywords
                #pragma shader_feature _ALPHATEST_ON
                #pragma shader_feature _SMOOTHNESS_TEXTURE_ALBEDO_CHANNEL_A

                // GPU Instancing
                #pragma multi_compile_instancing
                #pragma multi_compile _ DOTS_INSTANCING_ON

                #pragma vertex DepthOnlyVertex
                #pragma fragment DepthOnlyFragment

                //#include "Packages/com.unity.render-pipelines.universal/Shaders/LitInput.hlsl"
                // Note, the Lit shader that URP provides uses this, but it also handles the cbuffer which we already have.
                // We could change the shader to use their cbuffer, but we can also just do this :
                #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/CommonMaterial.hlsl"
                #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/SurfaceInput.hlsl"
                #include "Packages/com.unity.render-pipelines.universal/Shaders/DepthOnlyPass.hlsl"

                // Again, using the DepthOnlyPass means we also need _BaseMap, _BaseColor and _Cutoff shader properties.
                // Also including them in cbuffer, with the exception of _BaseMap as it's a texture.

                ENDHLSL
            }

            // URP also has a "Meta" pass, used when baking lightmaps.
            // UsePass "Universal Render Pipeline/Lit/Meta"
            // While this still breaks the SRP Batcher, I'm curious as to whether it matters.
            // The Meta pass is only used for lightmap baking, so surely is only used in editor?
            // Anyway, if you want to write your own meta pass look at the shaders URP provides for examples
            // https://github.com/Unity-Technologies/Graphics/tree/master/com.unity.render-pipelines.universal/Shaders
        }
    }
     
    elJoel and florianhanke like this.
  9. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    Thanks jdtec! I'll definitely look at it tomorrow.

    I found out that there are a whole bunch of new features in the Animation Texture Baker (ATB) repo under the dev branch. It actually has examples with ECS code. Unfortunately, the majority of that code was written against a really old version of Entities - even before the proxy stuff, which is now obsolete - so it won't compile without rewriting most of it.

    I started a new project, extracted the basic baking code, and got it running. It combines all the animations so that it creates 4 files - a mesh, a material, a normals file, and a positions file (the old one created 3 times more files because it separated all the animations). Combining the animations seems cool, although the generated texture files are about twice the size of the old ones. One gotcha is that the shaders are not URP compliant. I attempted to fix the unlit one to make it URP compliant, and it no longer complains about incompatibility when I view the shader in the inspector. Unfortunately, it produces the pink circle of death.

    upload_2020-12-18_3-0-59.png
    upload_2020-12-18_3-12-1.png

    I'm not sure why it doesn't work. It may be that this new shader requires some systems running or something... or maybe something needs to be tweaked in the shader... or who knows. I've thrown an entire day at it, but no luck.

    On a totally unrelated note, the shaders in the Animation Texture Baker repo are very similar to the ones in a YouTube tutorial I came across - one of them was definitely borrowing code from the other.

    Anyway.... it's 3am so I'm going to sleep.
     
    florianhanke and jdtec like this.
  10. elJoel

    elJoel

    Joined:
    Sep 7, 2016
    Posts:
    125
    You're the man, thanks a lot. I did some quick tests and it works great so far.
     
    jdtec likes this.
  11. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    I played around with jdtec's URP shader today. It works! It's limited to only two animations and requires generating and managing a lot of files, so it's not the ultimate solution, but it's better than what I had before, which was nothing :)

    I am still playing around with the dev branch of Animation Texture Baker. It's much more advanced than the master branch because it handles multiple animations really well, has some fancy baking editor tools, and is specifically geared towards ECS. I can tell that zulfajuniadi put a lot of effort into it. If I could just get that shader to stop being pink, it would be an excellent solution!

    Here's what the baker tool looks like:

    upload_2020-12-18_16-52-43.png

    And the animation graph tool:

    upload_2020-12-18_16-53-53.png
     
  12. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    You need to use the dev branch.
     
  13. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    I've been pretty busy over the past couple of days. I modified the Animation Texture Baker (ATB) baking code so that it encodes multiple animations per file, and then I modified the shader so that it can playback multiple animations.

    In some ways it's cool because it reduces the number of files and provides an easy way to switch animations. For example, a model with 10 animations needs 3 files per animation (material, normal, and position) - 30 files in total. In my new baking/playback process, only 3 files are needed (a material, a normals file, and a positions file).

    So this "new method" sounds really good right? There is only one gotcha that I didn't see coming. The textures can be a bit larger. It comes down to the restriction that texture dimensions must be a power of 2. With the old method, having lots of small textures also wastes space, but not as much. For the new method, in some cases the extra frames will knock the height dimension up by a power of two and suddenly there's a bunch of wasted space.

    I was able to use a simple encoding/decoding conversion for the normals, so I could store them in an RGB24 buffer instead of an RGBAHalf buffer, which cut the normal texture size in half.
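
    The conversion is just the standard [-1, 1] to [0, 1] remap so each component fits in 8 bits. The bake side is sketched below in C#; the shader does the reverse:

    Code (CSharp):
    using UnityEngine;

    public static class NormalCodec
    {
        // Bake side: a unit normal's components are in [-1, 1], so after
        // remapping to [0, 1] they fit an 8-bit channel.
        public static Color Encode(Vector3 n) =>
            new Color(n.x * 0.5f + 0.5f, n.y * 0.5f + 0.5f, n.z * 0.5f + 0.5f);

        // Decode (mirrors the shader side): back to [-1, 1], then normalize
        // to absorb the 8-bit quantization error.
        public static Vector3 Decode(Color c) =>
            new Vector3(c.r * 2f - 1f, c.g * 2f - 1f, c.b * 2f - 1f).normalized;
    }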

    Also, I've been ignoring the tan files for now... I'm assuming "tan" is short for tangent, but I'm not really sure what they're used for. I don't think it would be too difficult to add them later if needed.

    Using the horse model with 3 animations at 20fps, I calculated that the "old method" textures total 655,360 pixels and take 5,150KB on disk. The "new method" uses 524,288 pixels and 6,148KB on disk. That means the new method uses about 16% more memory. If I hadn't used the RGB24 buffer for normals, it would have been 25% larger. Man, that power-of-2 rule for textures really sucks!!!

    Now I see why some people do the bone baking technique instead. The only reason I haven't tried the bone baking technique is that I wasn't able to find a good working example that could easily be used with ECS.

    If I can figure it out, I will try to implement bilinear filtering discussed in this paper: https://medium.com/tech-at-wildlife-studios/texture-animation-techniques-1daecb316657 . In theory, that would allow most animations to be baked at a much lower frame-rate, and save a bunch of space. I'm not sure if I'll be able to pull it off though... shader coding is tough.

    Edit: I originally messed up my calculations - I didn't know that ATB had hardcoded 20fps into the baking code, so I was comparing the new method's 30fps textures against the old method's 20fps textures. After discovering that, I modified the numbers above so that both methods are using 20fps. I did notice that 20fps didn't look much different than 30fps, so depending on the game, you might be able to get away with 20fps. I'm pretty sure that's what I'll be doing.

    Old method.....
    upload_2020-12-20_23-34-12.png

    New method....
    upload_2020-12-20_23-35-2.png
     
    Last edited: Dec 23, 2020
    elJoel and florianhanke like this.
  14. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    Have any of you guys tested the Crowd Animations package for the GPU Instancer asset ( https://assetstore.unity.com/packages/tools/animation/gpu-instancer-crowd-animations-145114 )? I asked the author if it could be used on entities instead of GameObjects, and he said it comes with an example of how to do that (unlike Mesh Animator or GPU Animation Baker, which would require a lot of development to use without GameObjects).

    I'm tempted to buy it... especially since it's half off right now. It's pretty expensive though - it requires the purchase of two products, so the total cost plus tax, even after the discount, is around $70.

    Edit: I just realized that it doesn't support mobile platforms.... so much for that idea :-(
     
    Last edited: Dec 21, 2020
    florianhanke likes this.
  15. elJoel

    elJoel

    Joined:
    Sep 7, 2016
    Posts:
    125
    Maybe you could try splitting up the longer animations into two or even three, in order to save texture space.

    That new method sounds awesome, any chance you could upload it?
     
  16. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    I found out some good news - the texture size calculations I was doing weren't correct. For the "old method" I was using the Animation Texture Baker's code, but I didn't realize it was reducing the frame rate - there is a hardcoded value in ATB's baking code that forces it to use 20fps, whereas my baking code was automatically matching the fps of the clip, which is 30fps. So it wasn't an apples-to-apples comparison. I will re-run the numbers with matching fps, but the good news is that the texture sizes are probably going to be much more competitive.

    Also, I have been experimenting with different frame rates and I can hardly tell the difference between running a baked clip at 20fps and one at 30fps, so I think just reducing the frame rate will help a lot, and if we can figure out how to do some interpolation, we could reduce texture sizes even further.

    My code isn't compiling at the moment - I'm fixing a bug with the first frame, but I think I'll start a new repository and begin pushing to that - probably sometime tomorrow or maybe the day after.
     
    elJoel likes this.
  17. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    I pushed up a new repo that has all the texture-baking animation stuff I've been working on.

    https://gitlab.com/lclemens/animationcooker

    I got linear interpolation working! It makes an 11fps bake look comparable to a 30fps bake, at 1/4 the size. I also made an editor window with a simplified baking process. The baking code now generates a C# static class that contains all the animation clip data (for all models). After baking, a prefab is created with all the necessary ECS component data attached, so animation clips can be switched by setting per-instance shader properties.
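
    The interpolation itself lives in the shader (see the repo), but the gist of it is just this - a sketch with a made-up frames[frame][vertex] layout:

    Code (CSharp):
    using UnityEngine;

    public static class FrameLerp
    {
        // Sample the two baked frames that bracket the playback time and lerp
        // between them, so a low-fps bake still plays back smoothly.
        public static Vector3 SamplePosition(Vector3[][] frames, int vertex, float t01)
        {
            float f = t01 * (frames.Length - 1);
            int a = Mathf.FloorToInt(f);
            int b = Mathf.Min(a + 1, frames.Length - 1);
            return Vector3.Lerp(frames[a][vertex], frames[b][vertex], f - a);
        }
    }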

    One thing that wasted 12 hours of my time was Unity's bilinear filter being turned on in the texture settings. It defaults to enabled, and I didn't even know the setting was there. It was corrupting my first and last frames, so I spent forever going through every inch of the shader and baking code. I was so pissed and happy at the same time when it all suddenly cleared up after changing one little checkbox! Another thing that got me - I spent a long time trying to figure out how to debug shader code (I tried RenderDoc and the VS graphics debugger), but in the end I gave up empty-handed.

    Unfortunately, there is one BIG problem that I've been stuck on all day. The per-instance data doesn't work the way it's supposed to.

    In the spawner class, I do a batch spawn and then modify the per-instance properties for each entity so that each gets a different animation and color (cycling through the animations). So one would think I'd get a scene where each colored horse plays the same animation as other horses of the same color... but instead I get this weird mess!

    instancing_problem_ffmpeg.gif

    Clearly the instanced colors work fine. Why does it work for colors but not animations? The animations are clearly not matched up with the colors. Also, they should never switch animations during gameplay - I fixed them at spawn, and they should be permanent. And now for the really freaking weird part... the animation that plays depends on the position and orientation of the virtual camera. Uhhhhhhh... W... T... F???!!! What does the camera position and orientation have to do with anything??!!

    When I look at the entities in the inspector during play with the Entity Debugger, I can see that BeginFrame and EndFrame don't change, and that the indexes are unique per entity (some are set to clip 1, others to clip 2, etc.). That is what I expected. If I modify the spawner+authoring scripts so that the entities don't have BeginFrame or EndFrame, then everything works as expected (without individual animations, of course). Another way to "fix" it is to change the spawner code so that all spawns use the exact same animation (again, no individual animations).

    The spawner code looks something like this:

    Code (CSharp):
    EntityManager.SetComponentData(entity, new MaterialBeginFrame() { frameIndex = collection[clipIdx].beginFrame });
    EntityManager.SetComponentData(entity, new MaterialEndFrame() { frameIndex = collection[clipIdx].endFrame });
    EntityManager.SetComponentData(entity, new Unity.Rendering.MaterialColor() { Value = f4color });
    Where MaterialBeginFrame and MaterialEndFrame are defined like:

    Code (CSharp):
    [MaterialProperty("_EndFrame", MaterialPropertyFormat.Float, -1)]
    public struct MaterialEndFrame : IComponentData
    {
        public float frameIndex;
    }
    Which is identical to the way MaterialColor is defined in Unity's hybrid rendering package.

    Here is the shader code: https://gitlab.com/lclemens/animati...Cooker/Resources/CookedAnimationPlayer.shader . I'm still a noob at HLSL, so there's a chance I messed something up in there... although I really don't think there's anything in there related to the camera position/rotation.

    The problem occurs in HRV2. When I run with HRV1, the horses don't render at all (but I can see them in the entity debugger). I haven't tested with the old standard render pipeline.

    Whatever is going on... I'm confident that it has something to do with the per-instance shader properties.

    I don't know.... I'm really stuck. I can't tell if it's a bug in my shader code, or if it's a bug in Unity's per-instance shader properties.
     
    cultureulterior and elJoel like this.
  18. jdtec

    jdtec

    Joined:
    Oct 25, 2017
    Posts:
    296
    I've just quickly scanned your shader and have a couple of suggestions - no guarantees they'll help, just guesses(!):

    1) UNITY_SETUP_INSTANCE_ID(v); - move this to the top of the vert function. Maybe it needs to be set up before you access the instanced variables?

    2) Try changing half _EndFrame and similar to float _EndFrame. I really am guessing on that one though; if it helps, it helps. Try 1) first.

    I recently noticed my instanced properties were all messed up because I had an error in my shader that was stopping them from working. I also observed the random flickering and changing when moving the scene view camera around.
     
    lclemens likes this.
  19. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    Holy *&$^#%@! jdtec!!!!!!!!!!!!!!! YOU ARE DA MAN!! Flickering is completely gone. I could hug you right now!!

    instancing_problem_fixed.gif
     
    Last edited: Dec 31, 2020
  20. jdtec

    jdtec

    Joined:
    Oct 25, 2017
    Posts:
    296
    That's great! I know that feeling of finally tracking down those shader bugs. :)
     
    lclemens likes this.
  21. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    I was playing around with a bunch of different models last night. The first one I tried kept baking at a 90-degree angle. The only way I could get it to work was to do what zulfajuniadi did and reset the origins and rotations on the meshes before baking. It worked! I tested two more models and they worked just fine.

    Then I tested 2 old models, and no matter what I did, I couldn't get them to work. They would appear several meters away from their pivot points. One had the strange effect of disappearing as soon as the camera got near it. The other was reporting a frame rate of 1fps, so the animations were all bonkers.

    One question I keep having is: what is the max texture size for mobile devices? I've researched it several times, but all the information I can find talks about square textures. It seems like 2048x2048 is the consensus, but no one mentions rectangular textures. A 4096x16 texture uses only ~2% of the memory that a 2048x2048 texture does... so is the limitation based on total memory, or on the actual width and/or height dimensions?
     
  22. Shane_Michael

    Shane_Michael

    Joined:
    Jul 8, 2013
    Posts:
    156
    That sounds like an issue with the bounding box of the renderer. If you are offsetting the vertices too much from the origin, they appear way outside where the renderer says they are supposed to be, so the culling is incorrect.

    The minimum required value for the maximum texture size is 2048 for GLES 3.0, so the limitation is based on a specific numerical dimension. This is up from only 64 in GLES 2.0. In practice, most devices anyone would be using for gaming will very likely support 8192, but you'd have to query it manually on the device before attempting to create a texture that large.
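
    (Unity exposes that device limit at runtime as SystemInfo.maxTextureSize, so a bake or load step can check it before creating a long, thin texture - a minimal sketch:)

    Code (CSharp):
    using UnityEngine;

    public static class TextureLimit
    {
        // SystemInfo.maxTextureSize is the largest dimension the GPU supports
        // for either side of a texture.
        public static bool Fits(int width, int height)
        {
            int max = SystemInfo.maxTextureSize;
            if (width > max || height > max)
            {
                Debug.LogWarning($"Baked texture {width}x{height} exceeds device limit {max}.");
                return false;
            }
            return true;
        }
    }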

    As an aside, I think you are making things a bit more difficult for yourself by disabling interpolation on the texture and then manually interpolating in the shader. Texture hardware does that for free, very efficiently. Set the texture wrap mode to clamp, keep bilinear interpolation on, and the texture sampler will interpolate for you. Just add your lerp amount to the texture coordinates you sample from.

    The easy way to avoid messing up your first and last frames is to duplicate them in the texture where your different animations are next to each other, so the interpolation picks up the same frame twice (blending it only with itself).
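
    (Baker-side, that duplication is just a row-layout change. A sketch, with a hypothetical row table rather than anything from ATB:)

    Code (CSharp):
    // Returns which source frame each texture row should contain: the clip's
    // frames with the first and last repeated, so hardware bilinear filtering
    // only ever blends a boundary frame with itself.
    public static class ClipPadding
    {
        public static int[] PaddedFrameRows(int frameCount)
        {
            var rows = new int[frameCount + 2];
            rows[0] = 0;                           // duplicated first frame
            for (int f = 0; f < frameCount; f++)
                rows[f + 1] = f;                   // the clip itself
            rows[frameCount + 1] = frameCount - 1; // duplicated last frame
            return rows;
        }
    }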

    Oh, and the +(0.5, 0.5) in the shader just moves the sampling position from the corner of the texels to the center, so when you sample an interpolated texture you don't get any contribution (or, given floating-point error, very little) from neighbouring texels.
     
    lclemens, jdtec and florianhanke like this.
  23. jdtec

    jdtec

    Joined:
    Oct 25, 2017
    Posts:
    296
    The original texture baker code happens to encode one animation per row of the texture (likely for simplicity). This is why it tends to create long, thin textures.

    If square textures are required, you could change the baker so it lays out animation frames across multiple rows and then provides animation texture start/end offsets. This would probably save texture space in general too, with the sacrifice being a little additional code complexity.
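
    (The index math for a wrapped layout like that is small - a sketch assuming a frame-major linear order:)

    Code (CSharp):
    using UnityEngine;

    public static class WrappedLayout
    {
        // Treat the bake as one long linear buffer (frame-major) and fold it
        // into the texture width, so a texel is found from (frame, vertex)
        // regardless of which row it landed on.
        public static Vector2Int TexelFor(int frame, int vertex, int vertexCount, int texWidth)
        {
            int linear = frame * vertexCount + vertex;
            return new Vector2Int(linear % texWidth, linear / texWidth);
        }
    }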
     
    lclemens likes this.
  24. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701

    You were right about the bounding box - I played around with it a little and got that model working. I don't know why, but the way the model was built, it imports tilted up about 30 degrees, the bounding box is also tilted, and both objects were offset quite a bit from the origin and rotated. It's working fine now though - thanks!!

    I guess I'm still unclear about texture sizes and what you mean by "the minimum required value for the maximum texture size". Is that the width dimension, or does it mean the texture can't be larger than 4,194,304 pixels (2048x2048)? If it's 4,194,304 pixels, then I'll have no problem using baked textures with large vertex counts, like 65536x64 (which has the same pixel count as 2048x2048). However, if it's the largest dimension (width or height), then a 65536x64 texture would be no bueno.

    Thanks for the info on the +(0.5, 0.5) in the shader - that makes perfect sense!

    As for the interpolation, I don't know which is more difficult... putting in extra frames and borders (and then offsetting all the clip indexes), or doing the linear interpolation myself (it was only 6 lines of code - that took me a day to write because I suck, lol!). I think bilinear means sampling the top/bottom and left/right pixels, but wouldn't interpolating horizontally (left/right) be bad in this case, since we don't want to interpolate with a completely different vertex? I couldn't find an option in Unity's built-in stuff to interpolate only in the vertical direction... the only setting I found was a checkbox to enable/disable the filter.

    On a side note... I think it would be pretty easy to use one line of the texture to store animation clip information, which would make it easier to set the current clip from code. That way I could just set a clip index instead of setting two parameters (_BeginFrame and _EndFrame). I could also have the C# code generated by the cook function create enums for each clip... so it would be as simple as telling the shader to play Horse.Walk or Horse.Idle. That's next on my todo list.
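
    (Something along these lines could write that clip-info row - the normalized encoding into color channels here is just one possible scheme, not what the repo does:)

    Code (CSharp):
    using UnityEngine;

    public static class ClipInfoRow
    {
        // Pixel N of a reserved row holds clip N's begin/end frames, so the
        // shader only needs a single clip-index property. Frame indexes are
        // normalized by texture height so they survive a color channel.
        public static void Write(Texture2D tex, int row, (int begin, int end)[] clips)
        {
            for (int i = 0; i < clips.Length; i++)
            {
                float begin = (clips[i].begin + 0.5f) / tex.height;
                float end = (clips[i].end + 0.5f) / tex.height;
                tex.SetPixel(i, row, new Color(begin, end, 0f, 1f));
            }
            tex.Apply();
        }
    }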
     
  25. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    I did read one paper where the author used that technique - he was wrapping clips to fill up the square. For now, I don't think I'll mess with it though. Because of the ability to do a little frame interpolation, AnimationCooker does a pretty good job of suggesting the optimum frame rates for a specific texture size to maximize the available space. There could be a benefit to wrapping, though, if it turns out there is a maximum "longest dimension" for textures... in that case, wrapping would allow models to have more vertexes. I still haven't figured out whether the limit people keep talking about is a maximum longest dimension or a maximum size in bytes.
     
    jdtec likes this.
  26. Shane_Michael

    Shane_Michael

    Joined:
    Jul 8, 2013
    Posts:
    156
    It is the maximum size of a single texture dimension (width or height). GLES 3.0 is guaranteed to support at least 2048 x 2048. It's fairly common for anything used for gaming to support a maximum dimension of 8192 (made a typo last time), so you can usually use textures up to 8192 x 8192, but it's possible you will run into devices that don't support it.

    Edit: And, just to be completely clear, what I mean is that both dimensions have to be 2048 or smaller to be absolutely safe; but if you had to, you could go up to 8192 for one dimension, and it would likely be reasonable to simply exclude devices that don't support that.

    As long as you sample from the center of the texel you shouldn't have issues with adjacent vertices bleeding over. I would also lean towards letting the texture sampler do the interpolation because, I suspect, the specialized texture hardware is going to be faster than doing it manually. But since you have the manual interpolation implemented already, certainly stick with that unless you want to experiment for optimization purposes.
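    To put the texel-center bit in code (just a sketch of the math, the same +(0.5, 0.5) mentioned earlier):

    Code (CSharp):
    using UnityEngine;

    public static class TexelMath
    {
        // Maps an integer texel coordinate to the UV at that texel's center,
        // so bilinear filtering can't pull in a horizontally adjacent
        // vertex's data.
        public static Vector2 TexelCenterUV(int x, int y, int width, int height)
        {
            return new Vector2((x + 0.5f) / width, (y + 0.5f) / height);
        }
    }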

    Yeah, I was going to mention that I would probably use texture arrays to make it easier to index animations with a simple integer, and to be able to change animations without binding a separate texture. And it may not be a bad idea to also use a second single-channel texture that maps normalized animation time to the proper baked-animation texture coordinate. That would allow non-linear encoding of the animation with respect to time, for more/less detail in certain parts of the animation, or possibly baking out only key frames to save memory.
     
    Last edited: Jan 2, 2021
    lclemens likes this.
  27. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    Thanks! I thought that might be the case, but I couldn't find a definitive answer. I wish it was based on memory instead of dimension, but I guess it is what it is :)

    I put in the extra line to hold clip info and changed the code generator to emit enums, and it works quite well. Now I can write lines like:

    Code (CSharp):
    1. EntityManager.SetComponentData<MaterialClipIndex>(entity, new MaterialClipIndex { clipIndex = (float)AnimDb.Horse.Idle });
    (Or change it in an Entities.ForEach()).
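    The generated file ends up shaped roughly like this (illustrative only - the real clip names and values come from whatever was baked):

    Code (CSharp):
    // Illustrative shape of the generated enums; each value matches the clip
    // index stored in the texture's clip-info line.
    public static class AnimDb
    {
        public enum Horse
        {
            Idle = 0,
            Walk = 1,
            Gallop = 2,
        }
    }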

    I never thought about doing non-linear animation encoding with respect to time... To be honest, I haven't ever really spent the time to animate a model, and since all the models I've downloaded so far have animations fixed at a flat fps, it never crossed my mind to try it. Do some people make animations with a variable fps? If so, I suppose I could use that info to vary the rate during the bake and store it in a secondary texture like you suggested. If not, then I'd have to create a GUI that I could use to tweak the fps throughout the clip. Or perhaps there's a way to analyze the frequency and vary it automatically. I don't think I want to tackle that one right now though.

    Doing the vertex wrapping that jdtec was talking about (as in using multiple lines for meshes with lots of vertexes) is on my wish-list, but I probably won't get to it this month. It seems like there are a lot of limitations with the animation-baking technique compared to traditional animation, so baking is best for cases where there are craploads of low-poly things to animate. I had to decimate a couple of my meshes to get them to fit into the 2048 limit. I have just a handful of things on my todo list, and then next week I'm going to switch gears from developing AnimationCooker to using it for my game.

    Earlier in the day I ran a quick benchmark for the interpolation that I wrote. I ran 100,000 instances of the horse model (1890 vertices, 3 animation clips, baked at 11fps). I was getting 13fps on my laptop whether my software interpolation was enabled or not: the delta-time was 76.8ms with it and 74.9ms without. Then I tested the hardware-based bilinear interpolation, and it also ran around 74.9ms. If it was working, that means the hardware bilinear interpolation is so fast it doesn't even register as a tenth of a millisecond. I'm not so sure it was working though... I selected it from the combo-box like this:
    [attached screenshot: upload_2021-1-3_4-50-4.png]
    But the animation looked horrible even with only one model in the scene - it was still really jerky... it looked like 11fps... as if the bilinear interpolation wasn't even enabled. Also, I would have expected the model to have artifacts on the first and last frame, but there were no such issues.

    On a separate topic... After I added the dots per-instance stuff to the shader, this box started displaying in the shader properties:
    [attached screenshot: upload_2021-1-3_4-56-12.png]
    But checking it doesn't seem to do anything at all.
    [attached screenshot: upload_2021-1-3_4-58-18.png]
    I would have guessed that "Saved by batching" would have changed, or the frame rate would have improved, but neither happens. Those 26 batches are for 10,000 models though... so I'm thinking the GPU is batching them but they're just not getting registered as batched.

    FYI - if there are any features you guys want to add, feel free to branch or fork AnimationCooker. I documented more than normal because I was learning a lot of new stuff... so while it's not the cleanest code, at least you can read what I was attempting to do :D.
     
    florianhanke likes this.
  28. Shane_Michael

    Shane_Michael

    Joined:
    Jul 8, 2013
    Posts:
    156
    I was thinking in terms of memory efficiency. Animations typically have key frames for bones at specific times and interpolate between them, so if you bake out an animation at a fixed rate, you are baking out a lot of redundant information: all the intermediate results of an interpolation that you could easily fold into the interpolation you are already doing at runtime anyway.

    The simplest approach would be to bake out a frame whenever you had a key frame in the original animation which would save a lot of memory and shouldn't reduce the quality of the final animation. You would need a second single-channel texture to map the animation time to the correct position in the animation texture, but that would be fairly small and more than offset by reducing the size of your main animation texture.
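    In sketch form, assuming the key-frame times are available at bake time (all names here are hypothetical):

    Code (CSharp):
    // Bake a frame only where the original animation has a key frame, then
    // build a small lookup that converts normalized clip time into a
    // (possibly fractional) baked-frame index. The lookup would live in a
    // single-channel texture sampled by normalized time.
    public static class TimeRemap
    {
        public static float[] Build(float[] keyTimes, float clipLength, int samples)
        {
            var remap = new float[samples];
            for (int i = 0; i < samples; i++)
            {
                float t = clipLength * i / (samples - 1);
                int k = 0;
                while (k < keyTimes.Length - 2 && keyTimes[k + 1] < t) k++; // find surrounding keys
                float span = keyTimes[k + 1] - keyTimes[k];
                remap[i] = k + (span > 0 ? (t - keyTimes[k]) / span : 0f); // fractional frame index
            }
            return remap;
        }
    }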

    It's not a big surprise that interpolation isn't much of a performance hit: you are reading a relatively large amount of texture/vertex data and doing a relatively small amount of work on it, so you are probably bandwidth limited. Which is why reducing the animation texture size may be helpful.

    I think your hardware interpolation is not working because you shouldn't floor your frame when you are not interpolating manually. You need the fractional component of the frame to make it into the texture coordinate if you want to sample an intermediate position.
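    In code terms it's something like this (C# standing in for the shader math; variable names are made up, and one frame per texture row is assumed):

    Code (CSharp):
    using UnityEngine;

    public static class FrameSampling
    {
        // Hardware bilinear: keep the frame's fractional part so the sampler
        // blends between adjacent frame rows on its own.
        public static float HardwareV(float frame, int textureHeight)
            => (frame + 0.5f) / textureHeight;

        // Manual interpolation: floor to a whole frame, sample rows v0 and v1,
        // then lerp the two samples by blend yourself.
        public static (float v0, float v1, float blend) ManualV(float frame, int textureHeight)
        {
            float f0 = Mathf.Floor(frame);
            return ((f0 + 0.5f) / textureHeight,
                    (f0 + 1.5f) / textureHeight,
                    frame - f0);
        }
    }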
     
    lclemens and florianhanke like this.
  29. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    Hmm... that variable-frame idea is starting to sound more interesting. I wonder if it could be done using the position texture, since it's ARGBHalf and currently only the RGB is being used. That leaves 16 bits just sitting there not being useful. I dug through that horse model's animation and it is as you say - each bone has a different number of keyframes; some, like the head and hips, have only 2 or 3 keyframes, but others, like the legs, have a keyframe almost every sample. I still don't have a clear vision of exactly how I'd implement it, but I'll chew on it over the next couple of days.

    It's starting to sound similar to Arathorn_J's approach - instead of baking vertex data to a texture, he bakes bone data. There are a few repos that use that technique, but they're all several years old, built around monobehaviours attached to objects, and they don't support URP or HRV2. It's awesome for saving memory though. One pitfall is that it does more work on the GPU, so Arathorn_J had to resort to dirty tricks with LODs and using fewer bones on distant characters.

    I think you're right about the animation texture size being the biggest bottleneck. I tested a model that has only 661 vertices instead of the horse model's 1890 (both baked with 60 frames total). That upped the frame rate from 13fps to 48fps with 100k models (software interpolation still only affects the end result by 0.02%).

    Back when I was using zulfajuniadi's project with just one animation and one material per model, I was getting 20fps with the horses, but now I'm only getting 13fps - and that's with only a single animation. I think something I did in the shader caused the slowdown, but I'm not sure what it was. Maybe it was adding the DOTS per-instance stuff? When interpolation is disabled, the two shaders don't really look that much different.

    Here's the 20fps one that a guy named Joe Rozek wrote (I converted it to work under URP)... https://github.com/sugi-cho/Animati...eAnimPlayer_Unlit_Diff_GpuInstance_URP.shader

    And here's my 13fps one... https://gitlab.com/lclemens/animati...Cooker/Resources/CookedAnimationPlayer.shader
     
    florianhanke likes this.
  30. Shane_Michael

    Shane_Michael

    Joined:
    Jul 8, 2013
    Posts:
    156
    Yeah, it will depend on what your animations are like. The naive solution is to bake out a frame for every key frame, but that won't be a big help for everything.

    Beyond that, I could imagine doing an error calculation for the interpolated animation versus the ground truth for each baked frame, and then removing the frames that contribute the least error until you either hit a maximum error threshold or can fit inside a given memory constraint.
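    Sketched out, that greedy reduction might look like this (purely illustrative; it assumes the baked frames are available as per-frame vertex arrays):

    Code (CSharp):
    using System.Collections.Generic;
    using System.Linq;
    using UnityEngine;

    public static class FrameReducer
    {
        // frames[i] = baked vertex positions for frame i (the ground truth).
        // Repeatedly drop the frame whose removal adds the least interpolation
        // error, until removing any more would exceed maxError.
        public static List<int> Reduce(Vector3[][] frames, float maxError)
        {
            var kept = Enumerable.Range(0, frames.Length).ToList();
            while (kept.Count > 2)
            {
                int best = -1;
                float bestErr = float.MaxValue;
                for (int j = 1; j < kept.Count - 1; j++)
                {
                    float err = RemovalError(frames, kept[j - 1], kept[j], kept[j + 1]);
                    if (err < bestErr) { bestErr = err; best = j; }
                }
                if (bestErr > maxError) break; // error budget reached
                kept.RemoveAt(best);
            }
            return kept;
        }

        // Max vertex distance between frame m and its reconstruction by
        // lerping between the surviving neighbour frames a and b.
        static float RemovalError(Vector3[][] f, int a, int m, int b)
        {
            float t = (m - a) / (float)(b - a);
            float err = 0f;
            for (int v = 0; v < f[m].Length; v++)
                err = Mathf.Max(err, Vector3.Distance(f[m][v], Vector3.Lerp(f[a][v], f[b][v], t)));
            return err;
        }
    }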

    Something to think about, definitely, because it is probably the memory use which is hurting you right now.
     
    lclemens likes this.
  31. nyanpath

    nyanpath

    Joined:
    Feb 9, 2018
    Posts:
    77
    You should only need to bake a few select keyframes and then interpolate with some kind of fitting curve (usually an S-curve) for most animations. A regular walk cycle could be done with just two keyframes (mirrored for the other side in code). If the walk cycle were split into arm and leg pairs, you could add an additional scripted offset to the movement to make it look even more natural with less image data. I really wish I had time to contribute to this, because I've looked into a lot of things that could possibly help or improve it.
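    The S-curve part can be as simple as a smoothstep on the blend factor (a sketch only, not from any of the repos discussed here):

    Code (CSharp):
    using UnityEngine;

    public static class SCurveBlend
    {
        // Ease the blend between two baked key poses so the motion
        // accelerates and decelerates instead of moving linearly.
        public static Vector3 Blend(Vector3 poseA, Vector3 poseB, float t)
        {
            float s = t * t * (3f - 2f * t); // classic smoothstep on [0,1]
            return Vector3.LerpUnclamped(poseA, poseB, s);
        }
    }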
     
    lclemens likes this.
  32. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    It took me a while to figure out what you meant by that :) You were right... I removed the "floor" function in the shader, changed a few ints to floats, and then the bilinear interpolation started working. The timing went from 77.5ms with it enabled to 76.0ms with it disabled, so it might be just a tiny fraction faster than mine (a 1.5ms difference vs a 1.9ms difference), although I was just eyeballing it... the delta rates are slightly noisy, so the decimal portion isn't very accurate.
     
  33. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    Something like that would definitely shrink the size by a lot. For this particular project I'm trying to make it generic so it works with any animation (attack actions, tree swaying, etc).

    On that horse animation, each bone had different keyframes (the legs had more, the hips only a few), but I'm pretty sure that if you take all the bones into account, at 30fps there's at least one keyframe at each interval for at least one bone. In that case, baking only the keyframes wouldn't help. So the implementation of what you're suggesting would be something like: modify the horse model so that it has only a few keyframes for walking, and then change the baking code so that it bakes only keyframes and interpolates with S-curves. Is that correct?

    Part of me still wonders if I'd be better off just baking the bone transforms and ignoring vertex positions altogether, doing what Arathorn_J and a few other people have done (which has a different set of problems). I would love to find a working example to play around with so I could compare the two methods, but every repo I've found that uses that method (including Unity.GPUAnimation) is several years old and broken under modern Unity.

    There is another technique that's interesting... there's an asset for sale called "Mesh Animator". It has two modes: http://jacobschieck.com/projects/meshanimator/documentation/documentation.html#sections . It looks to me like "Shader Mode" is basically what I'm doing right now. "Snapshot Mode" is a strange one... it bakes a mesh's vertex positions and then swaps them out at runtime (all through the CPU), but since the baked mesh is a regular mesh instead of a skinned mesh, it can be instanced. Pretty interesting idea. I would buy it, but the author informed me that everything is based on gameobjects, so it would be a lot of work to use it with ECS.
     
  34. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    What's the deal with ARGBHalf support on phones???

    I was doing some reading, and from what I can tell, ARGBHalf buffers are fully supported by any graphics card from the last 10 years or so, but mobile platforms didn't start supporting them until much later - I *think* GLES 3.2 and up? And even then, there seem to be questions about whether certain phones claiming floating-point buffer support are truthful in their claims. My goal was GLES 3.1 and up. Also, some can apparently read them but not write (which is fine, since the technique I'm using doesn't require writing). Some people were saying that ARGBFloat was running much faster than ARGBHalf on certain phones. It seems the situation around ARGBHalf support was a giant mess for a few years.

    So if I'm going to support this on older phones, does that mean I need to start stuffing these positions into an ARGB32 buffer instead? If so, I'll need to pack the 64 bits per pixel I have now into 32 bits (either by doing some dirty tricks to get 10.66 bits of precision per component, or ignoring the alpha and only getting 8 bits). If 8 to 10.66 bits (0-255 or 0-1618) is not enough precision per vertex component and causes jitter, then I'll either need to come up with a scheme to use multiple pixels per vertex sample, or use a second texture. Or maybe I'm wrong and old phones do support ARGBHalf?
     
    Last edited: Jan 6, 2021
  35. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    I added support for ARGB32 by packing pixels as R11G10B11 (aka X11Y10Z11), so X and Z get 11 bits of precision and Y (usually height) only gets 10. I put a checkbox on the GUI to switch between floating-point and integer texture types.
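    The pack/unpack looks roughly like this (the exact bit order here is my own choice, so treat it as illustrative):

    Code (CSharp):
    public static class PosPacking
    {
        // Pack a position (normalized to [0,1] against the mesh bounds) into
        // one 32-bit value: X in the top 11 bits, Y in the middle 10, Z in
        // the low 11.
        public static uint Pack(float x, float y, float z)
        {
            uint xi = (uint)(x * 2047f + 0.5f); // 11 bits: 0..2047
            uint yi = (uint)(y * 1023f + 0.5f); // 10 bits: 0..1023
            uint zi = (uint)(z * 2047f + 0.5f); // 11 bits: 0..2047
            return (xi << 21) | (yi << 11) | zi;
        }

        public static (float x, float y, float z) Unpack(uint p)
        {
            return (((p >> 21) & 2047u) / 2047f,
                    ((p >> 11) & 1023u) / 1023f,
                    (p & 2047u) / 2047f);
        }
    }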

    It works pretty well... I haven't noticed any jitter, and the memory allotment of the position texture is cut in half. It should also work on older phones that don't support ARGBHalf... although I was told in another thread that most (but not all) GLES 3 phones will support reading half-float buffers.

    After some testing, there wasn't a noticeable performance difference... the frame rate with 100k entities stayed the same (on my laptop anyway). There are slightly more instructions to execute in the shader due to decoding the floats, but that didn't seem to hurt, and the reduction in memory made no difference either.

    I got my a** kicked by a colorspace issue... I spent 16+ hours trying to figure out why the numbers I encoded weren't being read properly in the shader, and the reason turned out to be that I needed to switch the texture colorspace from sRGB to linear. bgolus figured it out after I posted a question in the shader forum. On the bright side, I learned how to use RenderDoc to debug shaders, plus a few other useful things :)

    I'm currently having a lot of issues getting the baked vertices to match the "baked" mesh. Out of 4 different models, only that horse model actually matches the vertices to the mesh after baking. The others either end up super huge or super tiny (off by factors of 40 to 100). A couple have their meshes flipped 90 degrees (but the vertices draw unflipped). Some have pivot points that are offset. I tried using skinRenderer.transform.TransformPoint() to convert the models to world-space first, but that didn't really solve the issue.
     
  36. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    I figured out the scaling/rotation/translation issue. I couldn't find a way to automatically determine whether a mesh should have its parent object's local rotation or position zeroed before baking - it seems to depend on the model and how the objects are nested. So I added two checkboxes on the GUI for toggling the zeroing of rotation and/or position. It works pretty well, and I tested it with 4 different models. One of them was a model I had been unable to get working previously, but now it works perfectly.

    This morning I ran a quick test with 4 models, each cycling through 5 animations. Two had ~1300 vertices, one had 661, and the other had 2010. They were all baked into 32-bit R11G10B11 textures at around 10-12fps (interpolation made them look just as good as the 30fps versions). My laptop got around 32fps for 100k instances. That is with an unlit shader. Making a simple lit one is next on my list... hopefully it won't dent frame rates too heavily.
     
  37. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    Hey, how are you guys handling animation changes and state? For example: changing an animation in code from idle to a single attack, waiting for it to finish, and then changing back to idle? Is your animation loop in C# or the shader? For now I'm not even trying to do blending... I just want to be able to execute run-once animations and revert to the default when the animation is over.

    Idea #1
    I originally had the animation loop in the shader. As far as I can tell, there's no way for the shader to send an event letting you know when the animation is done. So I set up a System with an OnUpdate function that tripped the animation, waited for the number of seconds the clip should theoretically take, and then switched back to the idle animation. That didn't work very well... I think the GPU clock isn't synchronized perfectly with the CPU clock, so there was some slop at the end of the clip.

    Idea #2
    So then I moved the animation loop out of the shader into a System. Every update, I send an instanced time parameter - the current clip time - to every animated entity. This seems to work okay. It dropped my fps with 100k entities from 20fps to 19fps, so I suspect setting an instanced material property every frame takes a slight toll. I read a thread in these forums where someone said that it's really slow on mobile, but I haven't tested that yet. (There's a sketch of this setup at the end of this post.)

    Idea #3
    Another idea I've been pondering is doing the loop inside the shader, with more state logic in the shader as well. To change animations I would set two properties - one for the index of the animation clip to swap to, and a second integer-based "command" specifying play-once-and-go-back-to-idle, play-once-and-stop, etc. I wouldn't get too fancy - nothing like Mecanim - just enough to suffice for a game with thousands of "minions" that need to switch between basics like idle, run, attack, and die.

    I'm not sure if #3 is overkill... putting an "Animator" into a shader might make it difficult to change and modify later on. On the other hand, it would make the setup less dependent on ECS - there would be no need to run any special systems. I could drop the shader into any project (ECS or classic Unity) and change animations by setting properties, without any glue. Also, I'm a little concerned about the performance of setting per-instance properties every frame on mobile devices. Are there any approaches that I missed? Suggestions?
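    For context, here's a bare-bones sketch of the kind of setup I mean by #2, using a Hybrid Renderer V2 instanced property (the _ClipTime property and type names are just illustrative, not my actual code):

    Code (CSharp):
    using Unity.Entities;
    using Unity.Rendering;

    // Hypothetical per-instance float that Hybrid Renderer V2 uploads to the
    // shader as the DOTS-instanced material property "_ClipTime".
    [MaterialProperty("_ClipTime", MaterialPropertyFormat.Float)]
    public struct ClipTime : IComponentData
    {
        public float Value;
    }

    // The animation clock lives on the CPU: every frame each instance's clip
    // time advances, and the shader just reads _ClipTime to pick its frame.
    public partial class ClipTimeSystem : SystemBase
    {
        protected override void OnUpdate()
        {
            float dt = Time.DeltaTime;
            Entities.ForEach((ref ClipTime t) => { t.Value += dt; }).ScheduleParallel();
        }
    }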
     
  38. jdtec

    jdtec

    Joined:
    Oct 25, 2017
    Posts:
    296
    I've gone with a solution similar to #2 for my actual in-game characters. I want the animation state controlled from game code in my case. It probably depends on how complex the thing being animated is.

    I ended up rolling a solution similar to Joachim's (baked bone transforms, skinning on the GPU), except it works with Hybrid Renderer V2 instanced properties.

    It was interesting to do. I put upper/lower body bone masking into the shader too so I can blend more combinations of animations.

    I don't think there's a right or wrong answer with a lot of this stuff; you have a gamut of possible solutions, from baking actual vertices into a texture with very limited flexibility, all the way to a full-on Morpheme-style state machine (or whatever the Unity equivalent is) with lots of flexibility and control.
     
    lclemens likes this.
  39. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    Last night I decided to go with #2 for now, and it seems to be working well. I don't know if I'll get blending going - it's vertices only, and vertex-only blending might not look that great anyway.

    That's pretty cool that you got the baked bone-transform GPU skinning going. Do you have it on a public repo somewhere? It seems like the majority of people in this thread switched to that method. From what I've read it uses way less memory but is harder on the GPU. Did you have to do a lot with LODs and reduced bone counts to get similar performance?
     
  40. jdtec

    jdtec

    Joined:
    Oct 25, 2017
    Posts:
    296
    Yea, I do plan to put it on a public repo in the future; I just need to find some time.

    I haven't profiled it much yet. I imagine it will be slower than baked vertices - harder on the GPU, as you say. The characters are quite simple and have low bone counts, about ~20 I think. I don't do anything with LODs atm, since they're simple meshes anyway.

    I only need to render hundreds to thousands of characters (not 10k+), so I don't need absolute bleedin'-edge speed. I even checked out DOTS animation first, but I decided it was easier and more stable to roll my own than to use it.
     
    Last edited: Jan 20, 2021
    lclemens likes this.
  41. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    I read this article about UE5 yesterday: "Game dev uses next-gen technology to create millions of dancing crabs" https://www.polygon.com/2021/5/27/2...ons-unreal-engine-5-millions-of-dancing-crabs .

    My first thought was: "Damn it, I should have waited and used UE5 instead of Unity!" And then my second thought was: "I'm surprised they could get that performance with an animation."

    Turns out the title is misleading - Nanite only works with static meshes, so DOTS and baked GPU animation still win :). However, I have seen UE4 assets that do baked GPU animations, so I bet it's just a matter of time until someone gets baked GPU animation working with Nanite on UE5...
     
  42. eggsamurai

    eggsamurai

    Joined:
    Oct 10, 2015
    Posts:
    95
    I always feel like Unity is too busy to get preview features to verified status. Unity is turning into something like an industry solution rather than a game engine... sigh... DOTS animation is not ready for production; the performance is really bad in some situations.
     
    lclemens likes this.
  43. varnon

    varnon

    Joined:
    Jan 14, 2017
    Posts:
    52
    I saw the post somewhere too (but without the dancing headline) and had similar thoughts. Nanite looks great, but I think people are overhyping it a little - or hyping it outside of its use case, maybe. I think GPU animation would be tricky, but maybe there is a way to match the shader up with whatever verts are being used by the Nanite mesh.
     
    lclemens likes this.
  44. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
  45. JesOb

    JesOb

    Joined:
    Sep 3, 2012
    Posts:
    1,070
    lclemens likes this.
  46. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    It came out in prerelease on Steam yesterday and I bought it. After playing for about half an hour it crashed and popped up a UE4 game-engine error :D. I wonder how they got 70k units to work in UE4 - I can't even get that many pathfinding units to work with DOTS and vertex animation.
     
  47. Arnold_2013

    Arnold_2013

    Joined:
    Nov 24, 2013
    Posts:
    254
    Can you rotate the camera and see the models from all sides? I think in "They Are Billions" they used 2D isometric flipbook rendering to gain a lot of performance, but as a result your view is always in the same direction. I assume 3 or 4 vertices per unit plus a flipbook texture would be more performant than GPU vertex animation.

    If your town starts at a fixed spot and the waves start at fixed spots, there might not be any pathfinding necessary - just a precomputed flow field pointing to your town.
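    The runtime side of a flow field is just a grid lookup per unit - something like this sketch (building the field, e.g. a BFS outward from the town, would happen once at map generation):

    Code (CSharp):
    using UnityEngine;

    public class FlowField
    {
        readonly Vector2[,] directions; // one precomputed, normalized direction per cell
        readonly float cellSize;

        public FlowField(Vector2[,] directions, float cellSize)
        {
            this.directions = directions;
            this.cellSize = cellSize;
        }

        // Each unit samples the cell it stands in; no per-unit pathfinding.
        public Vector2 Sample(Vector2 worldPos)
        {
            int x = Mathf.Clamp((int)(worldPos.x / cellSize), 0, directions.GetLength(0) - 1);
            int y = Mathf.Clamp((int)(worldPos.y / cellSize), 0, directions.GetLength(1) - 1);
            return directions[x, y];
        }
    }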
     
  48. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    Haha, you're right! The flipbook technique in They Are Billions was so obvious, but in Age of Darkness they did such a good job of making it look 3D that I just assumed it was 3D! I think part of what fooled me is that they have shadows on everything - characters, buildings, even trees. Now that I'm looking for it, I can tell it's 2D by moving the camera left and right - there's no parallax effect.

    You could be right about the pathfinding being a flow field too. The maps are randomly generated, but they could precompute the flow fields whenever a new map is generated. The enemies do walk around things like rocks and trees, and none of the characters in the game ever occupy the same space. I haven't played They Are Billions, but from the trailers it looks like there is zero collision avoidance.

    I wasn't able to get far enough to build any turrets that cause explosions (the game is really hard, even on normal difficulty), but looking at the trailers, I'm pretty sure there's no physics engine.
     
  49. Krajca

    Krajca

    Joined:
    May 6, 2014
    Posts:
    347
    Do you have a repo or any other material for this?
     
    lclemens likes this.
  50. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    701
    Has anyone tried upgrading to Unity 2021 with Hybrid Renderer 0.51.0-preview.32 and URP 12.1.7? As soon as I did that, I started getting errors:

    Shader error in 'CookedAnimation/UnlitFrame': 'LoadDOTSInstancedData_float': no matching 1 parameter function at line 152 (on d3d11)
    Compiling Subshader: 0, Pass: <Unnamed Pass 0>, Fragment program with DOTS_INSTANCING_ON INTERPOLATE
    Platform defines: SHADER_API_DESKTOP UNITY_ENABLE_DETAIL_NORMALMAP UNITY_ENABLE_REFLECTION_BUFFERS UNITY_LIGHTMAP_RGBM_ENCODING UNITY_LIGHT_PROBE_PROXY_VOLUME UNITY_PBS_USE_BRDF1 UNITY_SPECCUBE_BLENDING UNITY_SPECCUBE_BOX_PROJECTION UNITY_USE_DITHER_MASK_FOR_ALPHABLENDED_SHADOWS
    Disabled keywords: ALPHA_CLIP INSTANCING_ON LOOP_ANIMATION SHADER_API_GLES30 UNITY_ASTC_NORMALMAP_ENCODING UNITY_COLORSPACE_GAMMA UNITY_ENABLE_NATIVE_SHADOW_LOOKUPS UNITY_FRAMEBUFFER_FETCH_AVAILABLE UNITY_HALF_PRECISION_FRAGMENT_SHADER_REGISTERS UNITY_HARDWARE_TIER1 UNITY_HARDWARE_TIER2 UNITY_HARDWARE_TIER3 UNITY_LIGHTMAP_DLDR_ENCODING UNITY_LIGHTMAP_FULL_HDR UNITY_METAL_SHADOWS_USE_POINT_FILTERING UNITY_NO_DXT5nm UNITY_NO_FULL_STANDARD_SHADER UNITY_NO_SCREENSPACE_SHADOWS UNITY_PBS_USE_BRDF2 UNITY_PBS_USE_BRDF3 UNITY_PRETRANSFORM_TO_DISPLAY_ORIENTATION UNITY_UNIFIED_SHADER_PRECISION_MODEL UNITY_VIRTUAL_TEXTURING

    Shader error in 'CookedAnimation/UnlitFrame': undeclared identifier 'unity_DOTSInstancingF4_Metadata__SpeedInst' at line 152 (on d3d11)
    Compiling Subshader: 0, Pass: <Unnamed Pass 0>, Fragment program with DOTS_INSTANCING_ON INTERPOLATE
    Platform defines: SHADER_API_DESKTOP UNITY_ENABLE_DETAIL_NORMALMAP UNITY_ENABLE_REFLECTION_BUFFERS UNITY_LIGHTMAP_RGBM_ENCODING UNITY_LIGHT_PROBE_PROXY_VOLUME UNITY_PBS_USE_BRDF1 UNITY_SPECCUBE_BLENDING UNITY_SPECCUBE_BOX_PROJECTION UNITY_USE_DITHER_MASK_FOR_ALPHABLENDED_SHADOWS
    Disabled keywords: ALPHA_CLIP INSTANCING_ON LOOP_ANIMATION SHADER_API_GLES30 UNITY_ASTC_NORMALMAP_ENCODING UNITY_COLORSPACE_GAMMA UNITY_ENABLE_NATIVE_SHADOW_LOOKUPS UNITY_FRAMEBUFFER_FETCH_AVAILABLE UNITY_HALF_PRECISION_FRAGMENT_SHADER_REGISTERS UNITY_HARDWARE_TIER1 UNITY_HARDWARE_TIER2 UNITY_HARDWARE_TIER3 UNITY_LIGHTMAP_DLDR_ENCODING UNITY_LIGHTMAP_FULL_HDR UNITY_METAL_SHADOWS_USE_POINT_FILTERING UNITY_NO_DXT5nm UNITY_NO_FULL_STANDARD_SHADER UNITY_NO_SCREENSPACE_SHADOWS UNITY_PBS_USE_BRDF2 UNITY_PBS_USE_BRDF3 UNITY_PRETRANSFORM_TO_DISPLAY_ORIENTATION UNITY_UNIFIED_SHADER_PRECISION_MODEL UNITY_VIRTUAL_TEXTURING

    A Hybrid Renderer V2 batch is using the shader "CookedAnimation/UnlitFrame", but the shader is either not compatible with Hybrid Renderer V2, is missing the DOTS_INSTANCING_ON variant, or there is a problem with the DOTS_INSTANCING_ON variant.
    UnityEngine.GUIUtility:processEvent (int,intptr,bool&)

    I am under the impression that HR 0.51 can be used with Unity 2021 and URP 12.1.7. Was that a false assumption, or is there some sort of backward incompatibility causing these errors?

    The shader that quit working after the upgrade is this one: https://gitlab.com/lclemens/animati...Cooker/Resources/CookedAnimationPlayer.shader