
Feedback Wanted: Mesh scripting API improvements

Discussion in 'Graphics Experimental Previews' started by Aras, May 26, 2019.

  1. MUGIK

    MUGIK

    Joined:
    Jul 2, 2015
    Posts:
    481
    +1 for Mesh.ApplyWritableMeshData method.
    I don't have numbers on how long it takes to call Mesh.AllocateWritableMeshData, but I'm assuming it costs a bit.
    And the ability to reuse mesh data for things that change every frame would also be really nice!
     
    pragmascript and awesomedata like this.
  2. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    266
    Here's all the code I currently use to implement non-disposing ApplyMeshData methods (plus a way to slice MeshDataArrays): https://gist.github.com/DaZombieKiller/b42e5847d650781a2dadc64695504f95

    Note that it makes use of System.Runtime.CompilerServices.Unsafe, so you'll need that in your project.
     
    GeorgeAdamon and MUGIK like this.
  3. SirIntruder

    SirIntruder

    Joined:
    Aug 16, 2013
    Posts:
    49
    Code (CSharp):

    meshDataArray = Mesh.AllocateWritableMeshData(1);
    var meshData = meshDataArray[0];
    meshData.SetVertexBufferParams(vertexCount, TerrainVertex.VertexAttributeDescriptors);
    meshData.SetIndexBufferParams(indexCount, IndexFormat.UInt32);
    meshData.subMeshCount = 1;

    var vertexBuffer = meshData.GetVertexData<TerrainVertex>();
    var indexBuffer = meshData.GetIndexData<int>();

    // schedule job that writes to vertexBuffer
    // schedule job that writes to indexBuffer

    When I try this, I get errors on the scheduling of the second job, saying:

    Code (CSharp):

    InvalidOperationException: The previously scheduled job VertexBufferJob writes to the UNKNOWN_OBJECT_TYPE VertexBufferJob.vertices. You must call JobHandle.Complete() on the job VertexBufferJob, before you can read from the UNKNOWN_OBJECT_TYPE safely.
    This seems like a bug to me - why shouldn't I be able to write to the index and vertex buffers in parallel?

    I tried having indexJob take a dependency on vertexJob, but it doesn't fix the error - the only solution is to run those jobs strictly sequentially. Edit:
    NativeDisableContainerSafetyRestriction also avoids the error, without causing any visible issues.


    Additional note - I previously did this with manually allocated native arrays, which were then set on the mesh using SetVertexData/SetIndexData() - my understanding is that this approach saved me one extra copy of the entire mesh data.
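    For reference, a minimal sketch of the safety-attribute workaround mentioned above. The job and field names are made up for illustration; only the attribute itself is the point:

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using Unity.Jobs;

[BurstCompile]
struct IndexBufferJob : IJob // hypothetical job name
{
    // Opting out of the container safety checks lets this job run in
    // parallel with the job writing the vertex buffer. This is only safe
    // because the two jobs touch disjoint buffers of the same MeshData.
    [NativeDisableContainerSafetyRestriction]
    public NativeArray<int> indices;

    public void Execute()
    {
        for (int i = 0; i < indices.Length; i++)
            indices[i] = i;
    }
}
```

    With the attribute in place, both jobs can be scheduled without one depending on the other.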
     
    Last edited: May 30, 2021
  4. RecursiveEclipse

    RecursiveEclipse

    Joined:
    Sep 6, 2018
    Posts:
    298
    Is there no good way to dispose of MeshDataArray without introducing a sync point? MeshDataArray can't be disposed using .WithDisposeOnCompletion in an Entities.ForEach, Job.WithCode, or [DeallocateOnJobCompletion], and it doesn't even have a Dispose method with a JobHandle parameter.

    Is there a good practice/workaround here? Or is there a technical reason? This is my biggest gripe with this API: it's fast and DOTS-friendly, but you can also take a significant hit.
     
    pragmascript and MUGIK like this.
  5. pbhogan

    pbhogan

    Joined:
    Aug 17, 2012
    Posts:
    384
    +1 for Mesh.ApplyWritableMeshData :)

    I think a common situation is generating a bunch of meshes in jobs without knowing exactly how big they will be.

    Right now, I'm handling this by appending triangles to several vertex NativeLists in jobs and then once they're all done, calling Mesh.SetVertexBufferParams, Mesh.SetVertexBufferData, Mesh.SetIndexBufferParams, Mesh.SetIndexBufferData, Mesh.SetSubMesh and finally Graphics.DrawMesh all on the main thread for each of these vertex lists. The nice thing is that the lists can be reused every frame and don't need to be deallocated.
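    That main-thread upload step could look roughly like this. This is a sketch under stated assumptions: the Vertex struct, its attribute layout, and the parameter names are illustrative, not the poster's actual code:

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.Rendering;

// Illustrative vertex struct; the real layout must match the descriptors below.
struct Vertex
{
    public Vector3 position;
    public Vector3 normal;
}

static class MeshUpload
{
    public static void UploadAndDraw(Mesh mesh, NativeList<Vertex> vertices,
                                     NativeList<ushort> indices, Material material)
    {
        mesh.SetVertexBufferParams(vertices.Length,
            new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.Float32, 3),
            new VertexAttributeDescriptor(VertexAttribute.Normal, VertexAttributeFormat.Float32, 3));
        mesh.SetVertexBufferData(vertices.AsArray(), 0, 0, vertices.Length);

        mesh.SetIndexBufferParams(indices.Length, IndexFormat.UInt16);
        mesh.SetIndexBufferData(indices.AsArray(), 0, 0, indices.Length);

        mesh.SetSubMesh(0, new SubMeshDescriptor(0, indices.Length));
        Graphics.DrawMesh(mesh, Matrix4x4.identity, material, 0);
    }
}
```

    The NativeLists persist across frames, so nothing here allocates per frame except what the Mesh does internally.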

    It seems like this API would allow for moving some of those calls into jobs. It does seem like you have to call SetVertexBufferParams twice: once to preallocate some maximum number of vertices and then again to set the actual number of vertices once you're done. I'm not sure what kind of performance hit that might have, or even if that's safe to do. It would be nice to be able to avoid that. And then since you have to call Mesh.ApplyWritableMeshData on the main thread, is it actually any faster or is it then just doing a bunch of copying and it's basically the same as before?

    While we're hand-wavy wishlisting ;), it sure would be nice to be able to queue up the renders in jobs too. I know Mesh and Material can't go in jobs currently, but it would be nice to have some kind of job-compatible versions or handle or something so we can do the equivalent of Mesh.ApplyWritableMeshData and Graphics.DrawMesh in a job and not have to stall to call it a bunch of times sequentially on the main thread.
     
  6. bitinn

    bitinn

    Joined:
    Aug 20, 2016
    Posts:
    961
    I think it's worth clarifying which API is more performant for which usage, now that we have APIs that support:

    - Mesh.SetVertices, which can read a slice of a NativeArray
    - Mesh.SetVertexBufferData, which does more or less the same
    - Mesh.ApplyAndDisposeWritableMeshData, which can apply the same data to multiple meshes (or merge them into a single mesh)

    Say we are procedurally generating a mesh: it's not clear why using the 3rd option, aka the MeshData API, is better for performance, as the allocation is going to happen no matter what we do.

    In this case, creating a NativeArray manually and then writing it through Mesh.SetVertexBufferData appears faster, because we can pre-allocate a certain number of vertices and reuse that array through slices, while Mesh.ApplyAndDisposeWritableMeshData kills this intermediate storage.

    If we expand the above example to generating multiple meshes in parallel, it is also not clear whether Mesh.AllocateWritableMeshData is more performant than the manual NativeArray approach. The main reason is that we probably don't know exactly how many meshes we will generate, so it's hard to tell if allocating a lot of them in advance is a good strategy.

    In short, MeshData seems very good for fast reads, but for writes I haven't fully understood the benefit.
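    The pre-allocate-and-slice pattern described above can be sketched as follows (class and field names are illustrative):

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.Rendering;

class ReusableUploader
{
    // Allocated once and reused every frame; only the first `count` entries are uploaded.
    NativeArray<Vector3> scratch;

    public ReusableUploader(int maxVertices) =>
        scratch = new NativeArray<Vector3>(maxVertices, Allocator.Persistent);

    public void Upload(Mesh mesh, int count)
    {
        mesh.SetVertexBufferParams(count,
            new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.Float32, 3));
        mesh.SetVertexBufferData(scratch, 0, 0, count); // uploads just the slice [0, count)
    }

    public void Dispose() => scratch.Dispose();
}
```

    The key point is that SetVertexBufferData takes a start and count, so the persistent array never needs to be resized or reallocated.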
     
    pragmascript, awesomedata and MUGIK like this.
  7. pbhogan

    pbhogan

    Joined:
    Aug 17, 2012
    Posts:
    384
    Having got a bit of experience with this the last few days, I think I can answer this.

    The benefit is, if you're generating multiple meshes in jobs, you can move much of the setup and copying of mesh data into jobs too, thus taking it off the main thread.

    It's fairly minor. I went from sequentially doing it all on the main thread in about 0.3 ms to moving that all to jobs and then only doing the final draw call on the main thread in 0.065 ms. The time didn't disappear, it just got spread out over multiple threads. But I also don't have to stall for all the generating jobs to finish before I can work on applying the mesh data. So it's not nothing.

    Now, I'm still using jobs to build all the procedural mesh data in NativeLists first, and then just copying the results. It's essentially the same as Mesh.SetVertexBufferData, but just spread out in jobs:

    Code (CSharp):

    MeshData.SetVertexBufferParams( vertexCount, SurfaceVertex.Layout );
    MeshData.SetIndexBufferParams( vertexCount, IndexFormat.UInt16 );
    NativeArray<SurfaceVertex>.Copy( Vertices, 0, MeshData.GetVertexData<SurfaceVertex>(), 0, vertexCount );
    NativeArray<ushort>.Copy( Indices, 0, MeshData.GetIndexData<ushort>(), 0, vertexCount );
    MeshData.subMeshCount = 1;
    MeshData.SetSubMesh( 0, new SubMeshDescriptor( 0, vertexCount ), meshUpdateFlags );
    You can get around the repeated allocation/dispose by using reflection, like someone posted earlier - though I'm not 100% sure about the safety, and I'm still debugging some occasional graphical glitching. Hopefully Unity gives us an official Mesh.ApplyWritableMeshData call. But that's a somewhat minor performance difference.

    So TLDR, it's an improvement, but not necessarily game changing. Mileage may vary. Probably it opens the door to other improvements later.

    It would be nice if instead of building in NativeLists, there was an official way to add to MeshData iteratively, so we could save the buffer memory and copy time and squeeze out a little more performance that way.
     
    awesomedata and bitinn like this.
  8. bitinn

    bitinn

    Joined:
    Aug 20, 2016
    Posts:
    961
    Thanks for the write-up, you cleared up a key confusion here: the cost of these two approaches:

    - Mesh.AllocateWritableMeshData()
    - MeshData.SetVertexBufferParams()

    vs

    - Mesh.SetVertexBufferData()

    Based on your benchmark, while there are allocations on the main thread with Mesh.AllocateWritableMeshData, the penalty is worth the tradeoff, because now we can fill the majority of the vertex and index buffers in jobs, which in turn requires fewer sync points on the main thread.
     
  9. bitinn

    bitinn

    Joined:
    Aug 20, 2016
    Posts:
    961
    Now that I am using MeshDataArray & Mesh.ApplyAndDisposeWritableMeshData() more extensively, I'm starting to see their problems too.

    - Problem 1: In procedural generation we generally don't know how many meshes we need beforehand (think LOD and culling). The lack of a way to allocate a larger MeshDataArray and then slice it before applying is very problematic.

    - Problem 2: MeshData-related jobs are generally the longer-running jobs, and we would like to run them in parallel as much as possible, but because of Problem 1 we end up needing to apply multiple MeshDataArrays. Can you put them in a NativeArray<MeshDataArray>? Nope. Can we somehow apply MeshData instances? Nope, AFAIK.

    - Problem 3: Now add data streaming into the mix: on my main thread there is file IO or GC alloc happening, so ideally we would need to interleave mesh generation jobs in between. Can we somehow wait for a job handle to complete before applying a MeshDataArray from jobs? Nope, AFAIK.
     
    pragmascript and awesomedata like this.
  10. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    266
    There's an internal method that allows you to do this, so you can call it via reflection. Of course, that's very much not ideal.
     
  11. pbhogan

    pbhogan

    Joined:
    Aug 17, 2012
    Posts:
    384
    Agree on all of these.

    You might be able to deal with problem 2 by putting the individual MeshData structures in a NativeArray.

    Alternatively, I only have one MeshDataArray, since I happen to know the maximum mesh count I might need in advance, even if I won't use them all. So what I do is create a job for every MeshData in the array, add the job handle to a NativeList<JobHandle> and then use JobHandle.CombineDependencies to make the render code that applies and draws the meshes wait until all the generating jobs are complete, since you can't ApplyToMeshes or DrawMesh in jobs. It turns out those calls are quite inexpensive, though. Most of the work is the mesh generation and setting mesh data, as you point out, and those can run in parallel.
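    That fan-in pattern could look roughly like this (the GenerateJob type and method shape are made up for illustration):

```csharp
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

// Hypothetical generation job; one instance per MeshData in the array.
struct GenerateJob : IJob
{
    public Mesh.MeshData data;
    public void Execute() { /* fill the vertex/index buffers of `data` */ }
}

static class MeshGen
{
    public static void GenerateAndApply(Mesh[] meshes)
    {
        var meshDataArray = Mesh.AllocateWritableMeshData(meshes.Length);
        var handles = new NativeList<JobHandle>(Allocator.Temp);

        for (int i = 0; i < meshDataArray.Length; i++)
            handles.Add(new GenerateJob { data = meshDataArray[i] }.Schedule());

        // Fan-in: wait for all generation jobs before applying on the main thread.
        JobHandle.CombineDependencies(handles.AsArray()).Complete();
        handles.Dispose();

        Mesh.ApplyAndDisposeWritableMeshData(meshDataArray, meshes);
    }
}
```

    The generation jobs all run in parallel; only the apply/draw step is serialized on the main thread.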

    So you could probably do the same thing, just with multiple MeshDataArrays. Or, if you have some idea what your upper limit might be, preallocate more than you need at the cost of some memory and use only what you need.

    All that said, I hope to see the Mesh API evolve to consider these issues.
     
    bitinn likes this.
  12. Unarmed1000

    Unarmed1000

    Joined:
    Sep 12, 2014
    Posts:
    22
    A bit late to the party, but I didn't see anyone mention the strange fact that Mesh.subMeshCount can mess with your index buffer size.

    So you have this perfectly sized index buffer that fits your needs, then you set subMeshCount and suddenly it decides to change the index buffer size, just because you now need fewer sub-meshes.

    Seriously, who thought that was a good idea?

    Does anyone have a good workaround for the issue?

    EDIT: the workaround is to only use SetSubMeshes, as it also sets the subMeshCount without messing with the index buffer.
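    A minimal sketch of that workaround, replacing a subMeshCount assignment with a single SetSubMeshes call (the descriptor index ranges are illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

static class SubMeshWorkaround
{
    public static void SetTwoSubMeshes(Mesh mesh)
    {
        // Instead of: mesh.subMeshCount = 2;  (which may resize the index buffer)
        var subMeshes = new[]
        {
            new SubMeshDescriptor(0, 300),   // indices [0, 300)
            new SubMeshDescriptor(300, 150), // indices [300, 450)
        };
        // Sets subMeshCount implicitly, without touching the index buffer size.
        mesh.SetSubMeshes(subMeshes, MeshUpdateFlags.Default);
    }
}
```

    This assumes the index buffer already holds all 450 indices before the call.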
     
    Last edited: Nov 10, 2021
    a436t4ataf likes this.
  13. Aras

    Aras

    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,770
    No, it's a terrible idea indeed. But such is the life of public APIs: we made a mistake ten years ago, and now we can't ever fix the behavior, since there's code out there depending on subMeshCount working exactly like it does. The documentation explicitly has a note about this strange behavior. https://docs.unity3d.com/ScriptReference/Mesh-subMeshCount.html
     
    FM-Productions and Thaina like this.
  14. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    It's always strange to me that Unity will use this as a reason to not fix something horrible in an API, like this, or linear EXR files being completely unusable because they get gamma corrected when they shouldn't, but will break all rendering every few months with the SRPs and make us all diff every file in the depot and figure it out with no documentation or warning.

    That seems rather inconsistent, to put it mildly.

    There's always code depending on something working "exactly like it does". With proper documentation and patch notes, it's acceptable to make changes to those APIs and update them to make sense or be fixed, and to force people to update that code.

    It seems like half of Unity runs under the idea that no changes to APIs can ever break things, and the other half seems to think breaking everything is what they get paid for. Both, IMO, are too extreme.
     
    TerraUnity, a436t4ataf and JesOb like this.
  15. Aras

    Aras

    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,770
    I don't disagree.

    The subMeshCount behavior is perhaps not "completely idiotic" though. Like, okay, right now I would not do that behavior, but it also kinda makes sense for it to happen in some cases. As long as it's pointed out in the documentation, I think it's okay-ish. And that's exactly why I put the note in the documentation some months ago.

    The other issue you mentioned (EXR vs color spaces), I was under the impression that someone was fixing that many months ago. But I could be imagining things. Can you remind me of the case number so that I can remember what exactly it's about and chase things up?

    As per the "SRPs keep changing things all the time without any notice or documentation", yeah I'm with you there. I would try to not do that if I were working on SRPs. But I'm not, so :/
     
    laurentlavigne and TerraUnity like this.
  16. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Case 1080483 is one I reported in 2018, though I'm pretty sure @bgolus reported it as well before then. Essentially if you're in Gamma rendering mode and you save a linear EXR texture, it gamma corrects it even though it's linear. This was closed because supposedly things were relying on this behavior, but I can't imagine what would possibly rely on that behavior, since every time you serialize and deserialize the data, even if you correct for it, you get different results.

    It's funny, large parts of MicroSplat had to be designed differently to work around this issue, and it's the reason every MicroSplat object needs a component on it. Without being able to serialize a 16bit linear texture reliably, I had to store things in a scriptable object and generate the data on demand. It makes MicroSplat heavier than I would like for object workflows.

    In newer projects I avoid this by using ScriptableAssetImporter to turn my files into actual textures at import time rather than having to stick a component on everything. But that only works for files created inside Unity.
     
  17. Aras

    Aras

    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,770
  18. a436t4ataf

    a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    1,933
    I have noticed that many (many) Unity teams don't seem to be aware of the ability to 'obsolete' API calls, and that this is supported right down to the compiler level. In other platforms (both engines and languages) that are 10+ years old we're accustomed to seeing vast amounts of obsoleted calls, partly because it's a way of cleaning up the codebase but - IME on the other side of the fence - *mostly* because it lets us put in compiler-enforced, IDE-supported, notes to current engineers downstream.

    e.g.: [Obsolete("This call had some inconsistencies, use [the other version] instead - it works identically but fixes the mistakes; if the other one works for your code you should use it from now on; this version will be removed in a future release", false)]

    ... that in a later release gets that 'false' upgraded to a 'true'
    ... and then one release later the API call gets deleted. And no-one will complain! They've had auto compiler warnings about it for (at least) two releases, and auto compiler errors for a release.

    I've always thought it should go something like:

    Unity 2025.1.x / .2.x / .3.x : [Obsolete, false]
    Unity 2025.4.x LTS: [Obsolete, true]
    Unity 2026.1.1 : (DELETED)

    ... is something like that a possibility here? @Unarmed1000's observation is that using the other API call avoids the (in hindsight) bad behaviour, and as a developer the HUGE thing I'd want is some flag that says "you probably don't want to use this method, but you can if your code depends on it" -- i.e. an [Obsolete("..", false)] in the codebase would be perfect.

    EDIT: and for the record: I do this in my assets, to great effect! When someone is dependent upon a call I think was badly designed (I f***ed up when I wrote it originally) - and if they actually still need that behaviour - I get contacted when they notice the new warnings. More often: they see the warning, read the note, look at their code, realise that their own code would be cleaner NOT being dependent on this odd behaviour, clean up their code, save, and problem solved.
     
    MUGIK likes this.
  19. Aras

    Aras

    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,770
    They most definitely are. In all our current Editor + Engine (not counting packages) public APIs, I see 1886 Obsolete attributes right now :)
     
    a436t4ataf likes this.
  20. a436t4ataf

    a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    1,933
    That's great - I must be using the wrong bits of Unity :D because I rarely see them (have submitted some bug reports suggesting/requesting them in the past, where I've seen particularly gnarly cases which are crying out for them).

    Anyway ... a possible solution/workaround for the Mesh issues here?
     
  21. pragmascript

    pragmascript

    Joined:
    Dec 31, 2010
    Posts:
    107
    Let's say I have a procedural mesh where I want to add a handful of triangles every frame. Does this new Mesh API help with that at all, or do I have to make a completely new Mesh every frame where 99% of the vertices are copied from the old mesh?
     
  22. Invertex

    Invertex

    Joined:
    Nov 7, 2013
    Posts:
    1,551
    That's less an API issue and more a data-management understanding issue. Whether on GPU or CPU, if you want to add more elements to an array, a new array of the larger size has to be made to contain it. In the case of a list, it has a usually bigger hidden array inside that it keeps filling with elements, but when it reaches capacity, it too must create a larger array and copy the data over.

    You can do similar on the GPU. Make a buffer large enough to allow growth, and increase a counter as you add verts.
    You can then use the DrawProceduralIndirect methods to render the mesh, using that current vert count, copied into your "indirect arguments" buffer with ComputeBuffer.CopyCount().
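    A rough sketch of that GPU-side pattern, assuming a compute shader that appends vertices to an append buffer (the shader, kernel, and buffer names here are made up):

```csharp
using UnityEngine;

static class GpuAppendDraw
{
    public static void GenerateAndDraw(ComputeShader generateShader, int kernel,
                                       Material material, Bounds bounds,
                                       int maxVerts, int groups)
    {
        // Growable vertex storage: an append buffer with a hidden counter.
        var vertexBuffer = new ComputeBuffer(maxVerts, sizeof(float) * 3, ComputeBufferType.Append);
        vertexBuffer.SetCounterValue(0);

        // Indirect args: { vertex count, instance count, start vertex, start instance }.
        var args = new ComputeBuffer(4, sizeof(uint), ComputeBufferType.IndirectArguments);
        args.SetData(new uint[] { 0, 1, 0, 0 });

        // Hypothetical kernel appends verts and bumps the hidden counter.
        generateShader.SetBuffer(kernel, "_Vertices", vertexBuffer);
        generateShader.Dispatch(kernel, groups, 1, 1);

        // Copy the append counter into slot 0 of the args buffer, then draw.
        ComputeBuffer.CopyCount(vertexBuffer, args, 0);
        Graphics.DrawProceduralIndirect(material, bounds, MeshTopology.Triangles, args);
    }
}
```

    In practice the buffers would be created once and reused, not allocated per call.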
     
  23. pragmascript

    pragmascript

    Joined:
    Dec 31, 2010
    Posts:
    107
    Yes, I'm aware that my initial buffer would need to be big enough to hold the data, but that's not really useful when the API neither supports partial updates nor lets me retain writable arrays after a call to ApplyAndDisposeWritableMeshData.

    Yes, I'll probably have to do that instead of doing it CPU-side, since the API doesn't seem to support that use case.



     
  24. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    864
    I'd say it's an API issue. The Mesh class index buffer (GraphicsBuffer) could support dynamic index arrays (Append/Consume), but it does not. This means that every time you have a mesh with varying index count you need to write your own renderer + shader relying on DrawProceduralIndirect (like you propose above). When you work a lot with custom particle systems, marching cubes and the like, this is a huge annoyance.
     
    pragmascript likes this.
  25. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    731
    Seems there is a bug with the mesh API.

    When I assign BlendWeight and BlendIndices vertex attributes, the mesh position and normal attributes automatically change from float16 x 4 to float32 x 3.

    It's very odd and makes no sense. Is there a known workaround?

    The images below show the difference between assigning the vertex attributes and not doing so.

    @tteneder can you help or do you know who can?
     

    Attached Files:

  26. Kichang-Kim

    Kichang-Kim

    Joined:
    Oct 19, 2010
    Posts:
    1,012
    It is by design. See this:
    https://forum.unity.com/threads/cas...float32-unintentionally.1051406/#post-6825881
     
    joshuacwilde likes this.
  27. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    731
  28. Kichang-Kim

    Kichang-Kim

    Joined:
    Oct 19, 2010
    Posts:
    1,012
    Unfortunately, mobile is affected by this too.
     
  29. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    731
    Thanks, yes I tested and that appears to be the case.

    Any Unity devs that can weigh in on why this is the case? It seems odd that mobile (where FP16 makes the biggest difference) is where FP16 skinning is not supported. It would certainly help improve memory for us, since we have a lot of unique skinned meshes.
     
  30. kenamis

    kenamis

    Joined:
    Feb 5, 2015
    Posts:
    387
    Wouldn't the hardware simply drop the extra precision, so you're not paying any price computationally? The only price you'd be paying is a little more memory usage - and you're not saving half the memory anyway, because you're going from 4 dimensions to 3 dimensions (at twice the byte size)?
     
  31. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    731
    All mobile devices support f32, so using f16/half is required to get lower GPU cost (for GPU-skinned meshes). Also, the memory decrease is still substantial, as we have a significant number of high-poly (for mobile) skinned meshes.
     
  32. JesOb

    JesOb

    Joined:
    Sep 3, 2012
    Posts:
    1,109
    The slowest part of our devices (all devices: mobile, PC, consoles...) is memory.

    So most ways to optimize your game come down to making your memory access patterns as optimal as they can be.
    DOTS exists exactly because of this.

    Memory bandwidth is one of the parameters you care about when trying to optimize a game on every platform.
    Making your data as small as possible is a well-known optimization technique. It is common to compute part of the rendering data in the shader instead of passing it in, ready-made, in a mesh or texture.
    Some examples:
    - Computing the bitangent when skinning
    - Compressing normals in the GBuffer
    - The Arm Mali compressed back-buffer format
    - ...

    So making the input mesh smaller will definitely increase rendering performance because of better memory/cache usage.
     
  33. LaireonGames

    LaireonGames

    Joined:
    Nov 16, 2013
    Posts:
    705
    This is a slight side note, but it feels like the StaticBatchingUtility.Combine API could do with a similar update.

    I'm about to use it to run a quick test on something, and it's pretty shocking that the only inputs are either a GO or an array of GOs.

    Feels like it could do with some updates for things like:

    - Passing lists, to prevent garbage with .ToArray
    - Passing an array but defining a length of what to use from it
    - Passing whatever this needs directly instead of just GOs, to save some lookups (e.g. MeshFilter or Mesh, not sure which it tracks)
     
  34. MUGIK

    MUGIK

    Joined:
    Jul 2, 2015
    Posts:
    481
    Hi!
    Hope this is still a relevant place for sharing feedback.
    I'm making a mesh-splitting utility and trying to squeeze the maximum out of Jobs, Burst, and the new Mesh API. Unity 2020.3.19f1.

    1. To use WritableMeshData we need to know the exact VertexData layout, but this is not always possible. I would love to have some API that allows me to do something like this:
    Code (CSharp):

    var vertexAttributeDescriptorList = new List<VertexAttributeDescriptor>();

    vertexAttributeDescriptorList.Add(new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.Float32, 3));

    // some conditional attributes
    if (hasNormals)
        vertexAttributeDescriptorList.Add(new VertexAttributeDescriptor(VertexAttribute.Normal, VertexAttributeFormat.Float32, 3));

    if (hasUv0)
        vertexAttributeDescriptorList.Add(new VertexAttributeDescriptor(VertexAttribute.TexCoord0, VertexAttributeFormat.Float32, 2));
    // etc...

    Mesh.MeshData meshData = // writable mesh data
    meshData.SetVertexBufferParams(length, vertexAttributeDescriptorList.ToArray());

    var writer = meshData.GetVertexDataWriter(
        validateDimension: true, // validate input provided into the writer via the AppendXXX methods
        validateOrder: true      // validate the correct order of the provided data
    );

    // We know the exact attribute order, which means we can write only the necessary data
    for (var i = 0; i < length; i++)
    {
        writer.AppendPosition(vertices[i]); // there would be overloads that accept float3, float4, half3, half4, etc.

        if (hasNormals)
            writer.AppendNormal(normals[i]);

        if (hasUv0)
            writer.AppendTexcoord0(uv0[i]);
    }

    // So instead of a NativeArray<> with an explicit VertexData struct, we could use this quite convenient writer, imho.
    // After all, we just write bits into memory; why do I need to create a VertexData struct for each combination of attributes?
    // VertexData_Pos, VertexData_PosUv0, VertexData_PosNormUv0, VertexData_PosColorUv0, VertexData_PosNormColorUv0... This gets complicated pretty quickly.
    // And I need to create explicit jobs for each VertexData_XXX type.

    2. Because of problem [1] I need to create explicit attribute arrays. I don't want to recreate them each time, but I'm also not able to dispose a static readonly. What should I do?
    Code (CSharp):

    public static class PosNormUv0
    {
        public static readonly NativeArray<VertexAttributeDescriptor> Attributes = // how to dispose this?
            new NativeArray<VertexAttributeDescriptor>(new[]
            {
                new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.Float32, 3),
                new VertexAttributeDescriptor(VertexAttribute.Normal, VertexAttributeFormat.Float32, 3),
                new VertexAttributeDescriptor(VertexAttribute.TexCoord0, VertexAttributeFormat.Float32, 2),
            }, Allocator.Persistent);

        [StructLayout(LayoutKind.Sequential)]
        private struct VertexData
        {
            public float3 position;
            public float3 normal;
            public float2 uv0;
        }

        // jobs for this specific PosNormUv0.VertexData
        // The job needs the Attributes array to set up the mesh; that's why I can't use a managed VertexAttributeDescriptor[] array
    }

    3. It seems like a bug, but calling

    Code (CSharp):

    Mesh.MeshData.SetSubMesh(0, new SubMeshDescriptor(0, length, MeshTopology.Triangles), MeshUpdateFlags.Default);

    inside an IJob doesn't actually recalculate bounds. The docs say:
    The bounds, firstVertex and vertexCount values are calculated automatically by Mesh.SetSubMesh, unless the MeshUpdateFlags.DontRecalculateBounds flag is passed.
    Docs: https://docs.unity3d.com/2020.1/Doc...rence/Rendering.SubMeshDescriptor-bounds.html
    The docs for Mesh.MeshData.SetSubMesh just refer to Mesh.SetSubMesh.
    Explicitly setting bounds in the SubMeshDescriptor also doesn't work.
     
    pragmascript likes this.
  35. yasirkula

    yasirkula

    Joined:
    Aug 1, 2011
    Posts:
    2,879
    For skinned meshes with two vertex streams, am I right to assume that the following layout is correct:

    Stream 0: positions, normals, tangents, blend weights, blend indices
    Stream 1: colors, texture coordinates

    The documentation here got me confused because, ironically, it doesn't mention where the blend weights and blend indices should go in skinned meshes: https://docs.unity3d.com/ScriptReference/Rendering.VertexAttributeDescriptor.html
     
  36. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    266
    I'm not sure when it was introduced, but if you get the streams wrong you should get an error message. From experimentation I figured out that the intended layout is:
    Stream 0: positions, normals, tangents
    Stream 1: colors, texture coordinates
    Stream 2: blend indices, blend weights
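    That three-stream layout could be expressed as vertex attribute descriptors like this (the formats are chosen for illustration; only the stream assignments are the point):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

static class SkinnedLayout
{
    public static void Apply(Mesh mesh, int vertexCount)
    {
        // Stream 0: positions/normals/tangents; stream 1: colors/UVs; stream 2: skin data.
        var layout = new[]
        {
            new VertexAttributeDescriptor(VertexAttribute.Position,     VertexAttributeFormat.Float32, 3, stream: 0),
            new VertexAttributeDescriptor(VertexAttribute.Normal,       VertexAttributeFormat.Float32, 3, stream: 0),
            new VertexAttributeDescriptor(VertexAttribute.Tangent,      VertexAttributeFormat.Float32, 4, stream: 0),
            new VertexAttributeDescriptor(VertexAttribute.Color,        VertexAttributeFormat.UNorm8,  4, stream: 1),
            new VertexAttributeDescriptor(VertexAttribute.TexCoord0,    VertexAttributeFormat.Float32, 2, stream: 1),
            new VertexAttributeDescriptor(VertexAttribute.BlendWeight,  VertexAttributeFormat.Float32, 4, stream: 2),
            new VertexAttributeDescriptor(VertexAttribute.BlendIndices, VertexAttributeFormat.UInt32,  4, stream: 2),
        };
        mesh.SetVertexBufferParams(vertexCount, layout);
    }
}
```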
     
    Thaina likes this.
  37. yasirkula

    yasirkula

    Joined:
    Aug 1, 2011
    Posts:
    2,879
    Thank you for your answer! The two-stream approach I suggested is actually working without any issues, so I'm unsure whether your suggestion is the optimal layout, or mine, or something else entirely. It'd be nice to have explicit instructions about this in the documentation I linked.

    PS. The documentation says "skinned meshes often use two vertex streams" but doesn't explicitly indicate the stream indices for blend indices and blend weights.
     
  38. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    266
    In the latest Unity 2023 build, using the following code sample:
    Code (CSharp):

    var array = Mesh.AllocateWritableMeshData(1);
    var data  = array[0];
    data.SetVertexBufferParams(1,
        new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.Float32, 3, 0),
        new VertexAttributeDescriptor(VertexAttribute.Normal, VertexAttributeFormat.Float32, 3, 0),
        new VertexAttributeDescriptor(VertexAttribute.Tangent, VertexAttributeFormat.Float32, 4, 0),
        new VertexAttributeDescriptor(VertexAttribute.Color, VertexAttributeFormat.UNorm8, 4, 0),
        new VertexAttributeDescriptor(VertexAttribute.TexCoord0, VertexAttributeFormat.Float32, 2, 0),
        new VertexAttributeDescriptor(VertexAttribute.BlendWeight, VertexAttributeFormat.Float32, 4, 0),
        new VertexAttributeDescriptor(VertexAttribute.BlendIndices, VertexAttributeFormat.UInt32, 4, 0)
    );
    Mesh.ApplyAndDisposeWritableMeshData(array, new Mesh());
    I get the following runtime warning:
    (attached screenshot of the runtime warning)
     
    Thaina and yasirkula like this.
  39. yasirkula

    yasirkula

    Joined:
    Aug 1, 2011
    Posts:
    2,879
    Awesome, thanks! That's as descriptive as it could get. I'll follow your vertex layout then (I didn't get this warning in 2021 LTS). I hope Unity updates the documentation to state that "skinned meshes often use three vertex streams".
     
  40. DevDunk

    DevDunk

    Joined:
    Feb 13, 2020
    Posts:
    5,063
    Not sure if feedback is still open, but for me applying mesh data is not that useful.
    I am generating meshes and only know the index/vertex counts once the calculations are done (at which point I can read out the length of the list). Right now I would need to create a buffer afterwards and then loop through all the indices/vertices again, which isn't optimal. It would be great to get functionality closer to .SetVertices etc. in Burst as well.
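    One possible workaround, as a sketch (maxVertexCount, maxIndexCount, vertexLayout and actualIndexCount are hypothetical names, and it assumes you can bound the output size of a chunk): allocate the buffers at worst-case capacity and only reference the used range in the sub-mesh descriptor:
    Code (CSharp):
    var dataArray = Mesh.AllocateWritableMeshData(1);
    var data = dataArray[0];

    // Allocate for the worst case; the job fills in as much as it needs
    data.SetVertexBufferParams(maxVertexCount, vertexLayout);
    data.SetIndexBufferParams(maxIndexCount, IndexFormat.UInt32);

    // ... run the job, which reports actualIndexCount when complete ...

    // Only the used range of the index buffer is referenced by the sub-mesh
    data.subMeshCount = 1;
    data.SetSubMesh(0, new SubMeshDescriptor(0, actualIndexCount),
        MeshUpdateFlags.DontRecalculateBounds | MeshUpdateFlags.DontValidateIndices);
    This trades memory for avoiding the second pass, since the unused tails of the buffers stay allocated.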
     
  41. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    I would find it very useful to be able to create an AllocateWritableMeshData with the data/formats set up from a managed mesh. In my current work I'm doing a lot of mesh modifications rather than generating new meshes entirely, and the meshes I'm modifying could have any number of UV channels, vertex colors, etc. (or not), creating a large number of possible vertex structures. However, I'm only modifying the positions of the vertices, not all these other channels. If I could essentially copy the mesh and write a job to modify the vertices, without having to deal with what is likely hundreds if not thousands of possible mesh layouts, I'd be able to use MeshData for the task instead of going through the SetVertices API, which I believe would be faster.
     
    aras-p, MUGIK and DevDunk like this.
  42. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    266
    This was added in Unity 2023.1.0a12: there are now overloads of Mesh.AllocateWritableMeshData that take Mesh, Mesh[] and List<Mesh>.
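    A minimal sketch of how that might look for the modify-in-place use case above (untested; it assumes the Mesh-taking overload copies the source mesh's vertex layout and contents into the writable data):
    Code (CSharp):
    // Copies the source mesh's vertex layout and data into writable mesh data
    var dataArray = Mesh.AllocateWritableMeshData(sourceMesh);
    var data = dataArray[0];

    // Schedule a job here that modifies only the position attribute in place,
    // leaving UVs, colors, blend weights, etc. untouched
    // ...

    Mesh.ApplyAndDisposeWritableMeshData(dataArray, sourceMesh);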
     
    genieMarida, tteneder, aras-p and 2 others like this.
  43. DevDunk

    DevDunk

    Joined:
    Feb 13, 2020
    Posts:
    5,063
    QA closed my issue for this as "by design" (but QA made a different sample project, so I'm not sure what they tested).
    Reading from a generated mesh (which has markAsReadable disabled) works in the editor without a warning, but in builds it gives the error that the mesh is not readable.
    The editor should at least give a warning for this, imo.
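    Until then, a defensive check like this (just a sketch) catches it in the editor too:
    Code (CSharp):
    if (!mesh.isReadable)
        Debug.LogError($"'{mesh.name}' is not readable; reading its data will fail in builds.");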
     
    Thaina and MUGIK like this.
  44. LaireonGames

    LaireonGames

    Joined:
    Nov 16, 2013
    Posts:
    705
    Does anyone know how Mesh.MarkDynamic interacts with the jobbed versions?

    I'm trying to decide whether there is a chance it matters for the final write, or whether it's completely N/A since the jobs side will use different buffers.
     
  45. LaireonGames

    LaireonGames

    Joined:
    Nov 16, 2013
    Posts:
    705
    It seems to be mesh-fighting week for me. Does anyone know how to interpret a position as bytes on the shader side?

    For example:

    Code (CSharp):
    new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.UInt8, 4)
    Shader side:

    Code (CSharp):
    half4 vertex        : POSITION; // <<< ???
    I've tried using uint and then bit operations, but no joy. My mesh builds fine, except it won't render and has no preview; I can see the data is there, though.

    I stumbled on this, which I worry might be relevant to explaining what is going on under the hood:

    https://stackoverflow.com/questions...-to-float-and-reinterpretting-it-back-to-uint
     
  46. Thaina

    Thaina

    Joined:
    Jul 13, 2012
    Posts:
    1,168
    `half` is `VertexAttributeFormat.Float16`
     
  47. LaireonGames

    LaireonGames

    Joined:
    Nov 16, 2013
    Posts:
    705
    It is indeed, but I'm trying to figure out the byte equivalent. E.g. fixed is the step down from half in HLSL, but in Unity shaders it will just be auto-interpreted as half depending on the hardware.
     
  48. Thaina

    Thaina

    Joined:
    Jul 13, 2012
    Posts:
    1,168
    I can't understand what you're really trying to do. Is it something like this?

    https://forum.unity.com/threads/cant-pass-an-integer-to-a-shader.950419/#post-6196543

    To be honest, you should show the whole of the code you're trying to run, not just a vague description with two hard-to-decipher lines.
     
  49. LaireonGames

    LaireonGames

    Joined:
    Nov 16, 2013
    Posts:
    705
    The two lines I posted really are the crux of the problem, but I could have explained it better. I'm trying to send a position as bytes, so x, y and z are each a byte's worth of values, hence UInt8 with a dimension of 4 (4 because that is how we have to pack data for the GPU, in multiples of 4).

    I can get away with this because I'm making a game like Minecraft, which splits the world into chunks.

    The problem, though, is how to interpret the bytes on the shader side, hence the second line. I worry that under the hood Unity isn't handling the integer values properly, because GPUs are set up to deal with floating-point values.

    One thing I've come across is the HLSL function asuint, and I think that might be the direction I need to take. I'm mainly trying to see whether anyone else has tried to compress mesh data as harshly as down to a byte, and what gotchas they had to deal with.
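    For what it's worth, a hedged sketch of the two usual options (untested, and based on how integer vs. normalized vertex formats generally behave on GPUs rather than Unity-specific documentation): a non-normalized integer format like UInt8 has to be declared with an integer type in the vertex input, while UNorm8 arrives as a float in [0, 1] that you rescale yourself:
    Code (CSharp):
    // Option A: C# side uses VertexAttributeFormat.UInt8
    // -> declare the input as an integer type and read the raw 0..255 values
    uint4 vertex : POSITION;
    // float3 localPos = (float3)vertex.xyz;

    // Option B: C# side uses VertexAttributeFormat.UNorm8
    // -> the GPU converts to normalized floats, so a float declaration works
    float4 vertex : POSITION;
    // float3 localPos = vertex.xyz * 255.0;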
     
  50. Thaina

    Thaina

    Joined:
    Jul 13, 2012
    Posts:
    1,168
    I have to say, this is not helping.

    The two lines you call the crux of the problem only make sense in the context of your project: you have written the whole shader and the C# code and know how they interact, but only in your own mind. Nobody else in the world knows anything about that.

    My guess is that you're trying to send a uint to the shader. Normally we don't do integer operations on graphics cards. We might pack bytes into a color and manipulate them with addition and multiplication, but that is only useful for some operations.

    If you're trying to do bitwise operations, the Burst compiler and SIMD operations might be more suitable.