
Feedback Wanted: Mesh scripting API improvements

Discussion in 'Graphics Experimental Previews' started by Aras, May 26, 2019.

  1. MUGIK

    MUGIK

    Joined:
    Jul 2, 2015
    Posts:
    395
    +1 for Mesh.ApplyWritableMeshData method.
    I don't have numbers on how long it takes to call Mesh.AllocateWritableMeshData, but I'm assuming it costs a bit.
    And the ability to reuse mesh data for things that change every frame would also be really nice!
     
    pragmascript and awesomedata like this.
  2. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    208
    Here's all the code I currently use to implement non-disposing ApplyMeshData methods (plus a way to slice MeshDataArrays): https://gist.github.com/DaZombieKiller/b42e5847d650781a2dadc64695504f95

    Note that it makes use of System.Runtime.CompilerServices.Unsafe, so you'll need that in your project.
     
    MUGIK likes this.
  3. SirIntruder

    SirIntruder

    Joined:
    Aug 16, 2013
    Posts:
    49
    Code (CSharp):
    meshDataArray = Mesh.AllocateWritableMeshData(1);
    var meshData = meshDataArray[0];
    meshData.SetVertexBufferParams(vertexCount, TerrainVertex.VertexAttributeDescriptors);
    meshData.SetIndexBufferParams(indexCount, IndexFormat.UInt32);
    meshData.subMeshCount = 1;

    var vertexBuffer = meshData.GetVertexData<TerrainVertex>();
    var indexBuffer = meshData.GetIndexData<int>();

    // schedule job that writes to vertexBuffer
    // schedule job that writes to indexBuffer
    When I try this, I get errors on the scheduling of the second job, saying:

    Code (CSharp):
    InvalidOperationException: The previously scheduled job VertexBufferJob writes to the UNKNOWN_OBJECT_TYPE VertexBufferJob.vertices. You must call JobHandle.Complete() on the job VertexBufferJob, before you can read from the UNKNOWN_OBJECT_TYPE safely.
    This seems like a bug to me - why shouldn't I be able to write to the index and vertex buffers in parallel?

    I tried making the index job depend on the vertex job, but that doesn't fix the error - the only solution is to run the jobs strictly sequentially. Edit: NativeDisableContainerSafetyRestriction also avoids the error, without causing any visible issues.

    Additional note - I previously did this with manually allocated native arrays, which were then set on the mesh using SetVertexBufferData/SetIndexBufferData - my understanding is that the MeshData approach would save me one extra copy of the entire mesh data.
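    For reference, the NativeDisableContainerSafetyRestriction workaround mentioned above can be sketched like this (job and field names are illustrative, not the poster's actual code):

    ```csharp
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Collections.LowLevel.Unsafe;
    using Unity.Jobs;

    [BurstCompile]
    struct IndexBufferJob : IJob
    {
        // Tells the job safety system not to treat this array as aliasing
        // the vertex buffer: both come from the same MeshData, but the two
        // jobs write to disjoint memory.
        [NativeDisableContainerSafetyRestriction]
        public NativeArray<int> indices;

        public void Execute()
        {
            for (int i = 0; i < indices.Length; i++)
                indices[i] = i;
        }
    }
    ```

    With this attribute on the index-buffer field, the vertex and index jobs can be scheduled concurrently without triggering the InvalidOperationException.
    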
     
    Last edited: May 30, 2021
  4. RecursiveEclipse

    RecursiveEclipse

    Joined:
    Sep 6, 2018
    Posts:
    298
    Is there no good way to dispose of MeshDataArray without introducing a sync point? MeshDataArray can't be disposed using .WithDisposeOnCompletion in an Entities.ForEach, Job.WithCode, or [DeallocateOnJobCompletion], and it doesn't even have a Dispose method with a JobHandle parameter.

    Is there a good practice/workaround here? Or is there a technical reason? This is my biggest gripe with this API: fast and DOTS-friendly, but you can also take a significant hit.
     
    pragmascript and MUGIK like this.
  5. pbhogan

    pbhogan

    Joined:
    Aug 17, 2012
    Posts:
    341
    +1 for Mesh.ApplyWritableMeshData :)

    I think a common situation is generating a bunch of meshes in jobs without knowing exactly how big they will be.

    Right now, I'm handling this by appending triangles to several vertex NativeLists in jobs and then once they're all done, calling Mesh.SetVertexBufferParams, Mesh.SetVertexBufferData, Mesh.SetIndexBufferParams, Mesh.SetIndexBufferData, Mesh.SetSubMesh and finally Graphics.DrawMesh all on the main thread for each of these vertex lists. The nice thing is that the lists can be reused every frame and don't need to be deallocated.
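    A minimal sketch of that main-thread sequence (assuming `vertices` and `indices` are NativeLists filled by the jobs, and `VertexLayout`, `transformMatrix` and `material` are placeholders):

    ```csharp
    // Runs on the main thread once all generation jobs have completed.
    // The NativeLists can be reused next frame; only their current contents are copied.
    mesh.SetVertexBufferParams(vertices.Length, VertexLayout);
    mesh.SetVertexBufferData(vertices.AsArray(), 0, 0, vertices.Length);
    mesh.SetIndexBufferParams(indices.Length, IndexFormat.UInt16);
    mesh.SetIndexBufferData(indices.AsArray(), 0, 0, indices.Length);
    mesh.SetSubMesh(0, new SubMeshDescriptor(0, indices.Length));
    Graphics.DrawMesh(mesh, transformMatrix, material, 0);
    ```
    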

    It seems like this API would allow for moving some of those calls into jobs. It does seem like you have to call SetVertexBufferParams twice: once to preallocate some maximum number of vertices and then again to set the actual number of vertices once you're done. I'm not sure what kind of performance hit that might have, or even if that's safe to do. It would be nice to be able to avoid that. And then since you have to call Mesh.ApplyWritableMeshData on the main thread, is it actually any faster or is it then just doing a bunch of copying and it's basically the same as before?

    While we're hand-wavy wishlisting ;), it sure would be nice to be able to queue up the renders in jobs too. I know Mesh and Material can't go in jobs currently, but it would be nice to have some kind of job-compatible versions or handle or something so we can do the equivalent of Mesh.ApplyWritableMeshData and Graphics.DrawMesh in a job and not have to stall to call it a bunch of times sequentially on the main thread.
     
  6. bitinn

    bitinn

    Joined:
    Aug 20, 2016
    Posts:
    952
    I think it's worth clarifying which API is more performant for which usage, now that we have APIs that support:

    - Mesh.SetVertices, which can read a slice of a NativeArray
    - Mesh.SetVertexBufferData, which does more or less the same
    - Mesh.ApplyAndDisposeWritableMeshData, which can apply data to multiple meshes (or merge them into a single mesh)

    Say we are procedurally generating a mesh: it's not clear why the third option, aka the MeshData API, is better for performance, since the allocation is going to happen no matter what we do.

    In this case, creating a NativeArray manually and then writing it through Mesh.SetVertexBufferData appears faster, because we can pre-allocate a certain number of vertices and reuse that array through slices, while Mesh.ApplyAndDisposeWritableMeshData kills this intermediate storage.

    If we expand the above example into generating multiple meshes in parallel, it is also not clear whether Mesh.AllocateWritableMeshData is more performant than the manual NativeArray approach. The main reason is that we probably don't know exactly how many meshes we will generate, so it's hard to tell if allocating a lot of them in advance is a good strategy.

    In short, MeshData seems very good for fast reads, but for writes I haven't fully understood the benefit.
     
    pragmascript, awesomedata and MUGIK like this.
  7. pbhogan

    pbhogan

    Joined:
    Aug 17, 2012
    Posts:
    341
    Having got a bit of experience with this the last few days, I think I can answer this.

    The benefit is, if you're generating multiple meshes in jobs, you can move much of the setup and copying of mesh data into jobs too, thus taking it off the main thread.

    It's fairly minor. I went from sequentially doing it all on the main thread in about 0.3 ms to moving that all to jobs and then only doing the final draw call on the main thread in 0.065 ms. The time didn't disappear, it just got spread out over multiple threads. But I also don't have to stall for all the generating jobs to finish before I can work on applying the mesh data. So it's not nothing.

    Now, I'm still using jobs to build all the procedural mesh data in NativeLists first, and then just copying the results. It's essentially the same as Mesh.SetVertexBufferData, but just spread out in jobs:

    Code (CSharp):
    MeshData.SetVertexBufferParams( vertexCount, SurfaceVertex.Layout );
    MeshData.SetIndexBufferParams( vertexCount, IndexFormat.UInt16 );
    NativeArray<SurfaceVertex>.Copy( Vertices, 0, MeshData.GetVertexData<SurfaceVertex>(), 0, vertexCount );
    NativeArray<ushort>.Copy( Indices, 0, MeshData.GetIndexData<ushort>(), 0, vertexCount );
    MeshData.subMeshCount = 1;
    MeshData.SetSubMesh( 0, new SubMeshDescriptor( 0, vertexCount ), meshUpdateFlags );
    You can get around the repeated allocation/dispose by using reflection, as someone posted earlier, though I'm not 100% sure about the safety, and I'm still debugging some graphical glitching I get occasionally. Hopefully Unity gives us an official Mesh.ApplyWritableMeshData call. But that's a somewhat minor performance difference.

    So TLDR, it's an improvement, but not necessarily game changing. Mileage may vary. Probably it opens the door to other improvements later.

    It would be nice if instead of building in NativeLists, there was an official way to add to MeshData iteratively, so we could save the buffer memory and copy time and squeeze out a little more performance that way.
     
    awesomedata and bitinn like this.
  8. bitinn

    bitinn

    Joined:
    Aug 20, 2016
    Posts:
    952
    Thanks for the write-up, you cleared up a key confusion here about the cost of these two approaches:

    - Mesh.AllocateWritableMeshData()
    - MeshData.SetVertexBufferParams()

    vs

    - Mesh.SetVertexBufferData()

    Based on your benchmark, while there are allocations on the main thread with Mesh.AllocateWritableMeshData, the penalty is worth the tradeoff, because now we can fill the majority of the vertex and index buffers in jobs, which in turn requires fewer sync points on the main thread.
     
  9. bitinn

    bitinn

    Joined:
    Aug 20, 2016
    Posts:
    952
    Now that I am using MeshDataArray & Mesh.ApplyAndDisposeWritableMeshData() more extensively, I'm starting to see their problems too.

    - Problem 1: In procedural generation we generally don't know how many meshes we need beforehand (think LOD and culling). The lack of a way to allocate a larger MeshDataArray and then slice it before applying is very problematic.

    - Problem 2: MeshData-related jobs are generally the longer-running jobs, and we would like to run them in parallel as much as possible, but because of Problem 1 we end up needing to apply multiple MeshDataArrays. Can you put them in a NativeArray<MeshDataArray>? Nope. Can we somehow apply individual MeshData instances? Nope, AFAIK.

    - Problem 3: Now add data streaming into the mix: on my main thread there is file IO or GC alloc happening, so ideally we would need to interleave mesh generation jobs in between. Can we somehow wait for a job handle to complete before applying a MeshDataArray from jobs? Nope, AFAIK.
     
    pragmascript and awesomedata like this.
  10. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    208
    There's an internal method that allows you to do this, so you can call it via reflection. Of course, that's very much not ideal.
     
  11. pbhogan

    pbhogan

    Joined:
    Aug 17, 2012
    Posts:
    341
    Agree on all of these.

    You might be able to deal with problem 2 by putting the individual MeshData structures in a NativeArray.

    Alternatively, I only have one MeshDataArray, since I happen to know the maximum mesh count I might need in advance, even if I won't use them all. So what I do is create a job for every MeshData in the array, add the job handle to a NativeList<JobHandle> and then use JobHandle.CombineDependencies to make the render code that applies and draws the meshes wait until all the generating jobs are complete, since you can't ApplyToMeshes or DrawMesh in jobs. It turns out those calls are quite inexpensive, though. Most of the work is the mesh generation and setting mesh data, as you point out, and those can run in parallel.
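    That fan-out/fan-in pattern might look roughly like this (`GenerateJob` is a placeholder for the poster's actual job type, and `meshes` a List<Mesh> of targets):

    ```csharp
    using Unity.Collections;
    using Unity.Jobs;
    using UnityEngine;

    struct GenerateJob : IJob  // placeholder: fills one mesh's buffers
    {
        public Mesh.MeshData data;
        public void Execute() { /* write vertex/index data here */ }
    }

    // Fan out: one job per MeshData in the array, all running in parallel.
    var handles = new NativeList<JobHandle>(Allocator.Temp);
    for (int i = 0; i < meshDataArray.Length; i++)
        handles.Add(new GenerateJob { data = meshDataArray[i] }.Schedule());

    // Fan in: wait for every generation job, then apply on the main thread.
    JobHandle.CombineDependencies(handles.AsArray()).Complete();
    Mesh.ApplyAndDisposeWritableMeshData(meshDataArray, meshes);
    ```
    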

    So you could probably do the same thing, just with multiple MeshDataArrays. Or, if you have some idea what your upper limit might be, preallocate more than you need at the cost of some memory and use only what you need.

    All that said, I hope to see the Mesh API evolve to consider these issues.
     
    bitinn likes this.
  12. Unarmed1000

    Unarmed1000

    Joined:
    Sep 12, 2014
    Posts:
    20
    A bit late to the party, but I didn't see anyone mention the strange fact that Mesh.subMeshCount can mess with your index buffer size.

    So you have this perfectly sized index buffer that fits your needs, then you set subMeshCount and suddenly it decides to change the index buffer size, just because you now need fewer sub meshes.

    Seriously, who thought that was a good idea?

    Does anyone have a good workaround for the issue?

    EDIT: the workaround is to only use SetSubMeshes, as it also sets subMeshCount without messing with the index buffer.
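    A sketch of that workaround, assuming the SetSubMeshes overload taking a descriptor array and update flags (the index counts are placeholders):

    ```csharp
    // Instead of assigning mesh.subMeshCount (which may resize the index buffer),
    // set all sub-mesh descriptors in a single call:
    var subMeshes = new[]
    {
        new SubMeshDescriptor(0, firstIndexCount),
        new SubMeshDescriptor(firstIndexCount, secondIndexCount),
    };
    mesh.SetSubMeshes(subMeshes, MeshUpdateFlags.DontRecalculateBounds);
    ```
    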
     
    Last edited: Nov 10, 2021
    a436t4ataf likes this.
  13. Aras

    Aras

    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,770
    No, it's a terrible idea indeed. But such is the life of public APIs: we made a mistake ten years ago, and now we can't ever fix the behavior, since there's code out there depending on subMeshCount working exactly like it does. The documentation explicitly has a note about this strange behavior: https://docs.unity3d.com/ScriptReference/Mesh-subMeshCount.html
     
    FM-Productions and Thaina like this.
  14. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,294
    It's always strange to me that Unity will use this as a reason not to fix something horrible in an API - like this, or linear EXR files being completely unusable because they get gamma corrected when they shouldn't - but will break all rendering every few months with the SRPs and make us all diff every file in the depot and figure it out with no documentation or warning.

    That seems rather inconsistent, to put it mildly.

    There's always code depending on something working "exactly like it does". With proper documentation and patch notes, it's acceptable to change those APIs, update them to make sense or be fixed, and force people to update their code.

    It seems like half of Unity runs under the idea that no changes to APIs can ever break things, and the other half seems to think breaking everything is what they get paid for. Both, IMO, are too extreme.
     
    TerraUnity, a436t4ataf and Jes28 like this.
  15. Aras

    Aras

    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,770
    I don't disagree.

    The subMeshCount behavior is perhaps not "completely idiotic" though. Right now I would not implement that behavior, but it also kind of makes sense for it to happen in some cases. As long as it's pointed out in the documentation, I think it's okay-ish. And that's exactly why, some months ago, I put the note in the documentation.

    The other issue you mentioned (EXR vs color spaces): I was under the impression that someone was fixing that many months ago, but I could be imagining things. Can you remind me of the case # so that I can remember what exactly it's about and chase things up?

    As for the "SRPs keep changing things all the time without any notice or documentation": yeah, I'm with you there. I would try not to do that if I were working on the SRPs. But I'm not, so :/
     
    laurentlavigne and TerraUnity like this.
  16. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,294
    Case 1080483 is one I reported in 2018, though I'm pretty sure @bgolus reported it as well before then. Essentially if you're in Gamma rendering mode and you save a linear EXR texture, it gamma corrects it even though it's linear. This was closed because supposedly things were relying on this behavior, but I can't imagine what would possibly rely on that behavior, since every time you serialize and deserialize the data, even if you correct for it, you get different results.

    It's funny, large parts of MicroSplat had to be designed differently to work around this issue, and it's the reason every MicroSplat object needs a component on it. Without being able to serialize a 16bit linear texture reliably, I had to store things in a scriptable object and generate the data on demand. It makes MicroSplat heavier than I would like for object workflows.

    In newer projects I avoid this by using a ScriptedImporter to turn my files into actual textures at import time, rather than having to stick a component on everything. But that only works for files created inside Unity.
     
  17. Aras

    Aras

    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,770
  18. a436t4ataf

    a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    1,647
    I have noticed that many (many) Unity teams don't seem to be aware of the ability to 'obsolete' API calls, and that this is supported right down to the compiler level. On other platforms (both engines and languages) that are 10+ years old, we're accustomed to seeing vast amounts of obsoleted calls - partly because it's a way of cleaning up the codebase, but (in my experience on the other side of the fence) *mostly* because it lets us put compiler-enforced, IDE-supported notes in for downstream engineers.

    e.g.: [Obsolete("This call had some inconsistencies, use [the other version] instead - it works identically but fixes the mistakes; if the other one works for your code you should use it from now on; this version will be removed in a future release", false)]

    ... that in a later release gets that 'false' upgraded to a 'true'
    ... and then one release later the API call gets deleted. And no-one will complain! They've had auto compiler warnings about it for (at least) two releases, and auto compiler errors for a release.

    I've always thought it should go something like:

    Unity 2025.1.x / .2.x / .3.x : [Obsolete, false]
    Unity 2025.4.x LTS: [Obsolete, true]
    Unity 2026.1.1 : (DELETED)

    ... is something like that a possibility here? @Unarmed1000's observation is that using the other API call avoids the (in hindsight) bad behaviour, and as a developer the HUGE thing I'd want is some flag saying 'you probably don't want to use this method, but you can if your code depends on it' - i.e. an [Obsolete("..", false)] in the codebase would be perfect.

    EDIT: and for the record: I do this in my assets, to great effect! When someone is dependent upon a call I think was badly designed (I f***ed up when I wrote it originally) - and if they actually still need that behaviour - I get contacted when they notice the new warnings. More often: they see the warning, read the note, look at their code, realise that their own code would be cleaner NOT being dependent on this odd behaviour, clean up their code, save, and problem solved.
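    The staged deprecation described above, written out (class name, messages, and version timing are illustrative):

    ```csharp
    using System;

    public static class ExampleApi
    {
        // Stage 1 (release N): compiler warning, code still compiles.
        [Obsolete("Had inconsistent behaviour; use NewCall instead. Will be removed in a future release.", false)]
        public static void OldCall() { }

        // Stage 2 (a later release): flip the second argument to true,
        // making any use of OldCall a hard compiler error:
        //   [Obsolete("Use NewCall instead.", true)]

        // Stage 3 (one release later): delete OldCall entirely.

        public static void NewCall() { }
    }
    ```
    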
     
    MUGIK likes this.
  19. Aras

    Aras

    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,770
    They most definitely are. In all our current Editor + Engine (not counting packages) public APIs, I see 1886 Obsolete attributes right now :)
     
    a436t4ataf likes this.
  20. a436t4ataf

    a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    1,647
    That's great - I must be using the wrong bits of Unity :D because I rarely see them (I have submitted some bug reports suggesting/requesting them in the past, where I've seen particularly gnarly cases crying out for them).

    Anyway ... a possible solution/workaround for the Mesh issues here?
     
  21. pragmascript

    pragmascript

    Joined:
    Dec 31, 2010
    Posts:
    101
    Let's say I have a procedural mesh where I want to add a handful of triangles every frame. Does this new Mesh API help with that at all, or do I have to make a completely new Mesh every frame where 99% of the vertices are copied from the old mesh?
     
  22. Invertex

    Invertex

    Joined:
    Nov 7, 2013
    Posts:
    1,313
    That's less an API issue and more a data-management understanding issue. Whether on the GPU or the CPU, if you want to add more elements to an array, a new array of the larger size has to be made to contain them. In the case of a list, there is a usually larger hidden array inside to keep filling with elements, but when it reaches the list's capacity, it too must create a larger array and copy the data over.

    You can do something similar on the GPU. Make a buffer large enough to allow growth, and increase a counter as you add verts.
    You can then use the DrawProceduralIndirect methods to render the mesh, using that current vert count, copied into your "indirect arguments" buffer with the ComputeBuffer.CopyCount() function.
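    As a sketch of that setup (assuming `appendBuffer` is a ComputeBufferType.Append buffer the compute shader appends vertices to, and `material` and `bounds` are set up elsewhere):

    ```csharp
    // Indirect args layout for DrawProceduralIndirect:
    // vertex count, instance count, start vertex, start instance.
    var args = new ComputeBuffer(4, sizeof(uint), ComputeBufferType.IndirectArguments);
    args.SetData(new uint[] { 0, 1, 0, 0 });

    // Copy the append buffer's hidden counter into slot 0 (byte offset 0),
    // so the draw uses the current vertex count without a GPU->CPU readback.
    ComputeBuffer.CopyCount(appendBuffer, args, 0);

    Graphics.DrawProceduralIndirect(material, bounds, MeshTopology.Triangles, args);
    ```
    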
     
  23. pragmascript

    pragmascript

    Joined:
    Dec 31, 2010
    Posts:
    101
    Yes, I'm aware that my initial buffer would need to be big enough to hold the data, but that's not really useful when the API neither supports partial updates nor lets me retain writable arrays after a call to ApplyAndDisposeWritableMeshData.

    Yes, I probably will have to do that instead of doing it CPU-side, since the API doesn't seem to support that use case.



     
  24. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    724
    I'd say it's an API issue. The Mesh class index buffer (GraphicsBuffer) could support dynamic index arrays (Append/Consume), but it does not. This means that every time you have a mesh with varying index count you need to write your own renderer + shader relying on DrawProceduralIndirect (like you propose above). When you work a lot with custom particle systems, marching cubes and the like, this is a huge annoyance.
     
    pragmascript likes this.
  25. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    567
    Seems there is a bug with the mesh API.

    When I assign BlendWeight and BlendIndices vertex attributes, the mesh position and normal attributes automatically change from float16 x 4 to float32 x 3.

    It's very odd and makes no sense. Is there a known workaround?

    The images below show the difference between assigning the vertex attributes and not doing so.

    @tteneder can you help or do you know who can?
     

    Attached Files:

  26. Kichang-Kim

    Kichang-Kim

    Joined:
    Oct 19, 2010
    Posts:
    859
    It is by design. See this:
    https://forum.unity.com/threads/cas...float32-unintentionally.1051406/#post-6825881
     
    joshuacwilde likes this.
  27. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    567
  28. Kichang-Kim

    Kichang-Kim

    Joined:
    Oct 19, 2010
    Posts:
    859
    Unfortunately, mobile is affected by this too.
     
  29. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    567
    Thanks, yes, I tested and that appears to be the case.

    Any Unity devs who can weigh in on why this is the case? It seems odd that mobile (where FP16 makes the biggest difference) is where FP16 skinning is not supported. It would certainly help improve memory for us, since we have a lot of unique skinned meshes.
     
  30. kenamis

    kenamis

    Joined:
    Feb 5, 2015
    Posts:
    373
    Wouldn't the hardware simply drop the extra precision, so you're not paying any price computationally? The only price you'd be paying is a little more memory usage - and you're not saving half the memory anyway, because you're going from 4 components to 3 components (at twice the byte size each)?
     
  31. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    567
    All mobile devices support f32, so using f16/half is required to get the lower GPU cost (for GPU-skinned meshes). Also, the memory decrease is still substantial, as we have a significant number of high-poly (for mobile) skinned meshes.
     
  32. Jes28

    Jes28

    Joined:
    Sep 3, 2012
    Posts:
    915
    The slowest part of our devices (all devices: mobile, PC, consoles...) is memory.

    So most ways to optimize a game come down to making your memory access patterns as optimal as they can be.
    DOTS exists exactly because of this.

    Memory bandwidth is one of the parameters you care about when optimizing a game on every platform.
    Making your data as small as possible is a well-known optimization technique. It is common to compute part of the rendering data in the shader instead of passing it in a mesh or texture in ready-made form.
    Some examples:
    - Computing the bitangent when skinning
    - Compressing normals in the GBuffer
    - Arm Mali's compressed back buffer format
    - ...

    So making the input mesh smaller will definitely improve rendering performance because of better memory cache usage.
     
  33. LaireonGames

    LaireonGames

    Joined:
    Nov 16, 2013
    Posts:
    655
    This is a slight side note, but it feels like the StaticBatchingUtility.Combine API could do with a similar update.

    I'm about to use it to run a quick test on something, and it's pretty shocking that the only inputs are either a GO or an array of GOs.

    Feels like it could do with some updates, for things like:

    Passing lists, to prevent garbage from .ToArray
    Passing an array but defining a length of what to use from it
    Passing whatever this needs directly instead of just a GO, to save some lookups (e.g. MeshFilter or Mesh, not sure which it tracks)
     
  34. MUGIK

    MUGIK

    Joined:
    Jul 2, 2015
    Posts:
    395
    Hi,
    Hope this is still a relevant place for sharing feedback.
    I'm making a mesh-splitting utility and trying to squeeze the maximum out of Jobs, Burst, and the new Mesh API. Unity 2020.3.19f1.

    1. To use WritableMeshData we need to know the exact VertexData layout, but this is not always possible. I would love to have some API that allows me to do something like this:
    Code (CSharp):
    var vertexAttributeDescriptorList = new List<VertexAttributeDescriptor>();

    vertexAttributeDescriptorList.Add(new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.Float32, 3));

    // some conditional attributes
    if (hasNormals)
        vertexAttributeDescriptorList.Add(new VertexAttributeDescriptor(VertexAttribute.Normal, VertexAttributeFormat.Float32, 3));

    if (hasUv0)
        vertexAttributeDescriptorList.Add(new VertexAttributeDescriptor(VertexAttribute.TexCoord0, VertexAttributeFormat.Float32, 2));
    // etc...

    Mesh.MeshData meshData = // writable mesh data
    meshData.SetVertexBufferParams(length, vertexAttributeDescriptorList.ToArray());

    var writer = meshData.GetVertexDataWriter(
        validateDimension: true, // validate input provided into writer with AppendXXX methods
        validateOrder: true      // validate the correct order of provided data
    );

    // We know the exact attribute order, which means we can write only the necessary data
    for (var i = 0; i < length; i++)
    {
        writer.AppendPosition(vertices[i]); // there are overloads that accept float3, float4, half3, half4, etc.

        if (hasNormals)
            writer.AppendNormal(normals[i]);

        if (hasUv0)
            writer.AppendTexcoord0(uv0[i]);
    }

    // So instead of a NativeArray<> with an explicit VertexData struct we can use this quite convenient writer, imho.
    // After all, we just write bits into memory; why do I need to create a VertexData struct for each combination of attributes?
    // VertexData_Pos, VertexData_PosUv0, VertexData_PosNormUv0, VertexData_PosColorUv0, VertexData_PosNormColorUv0... This gets complicated pretty quickly.
    // And I need to create explicit jobs for each VertexData_XXX type.
    2. Because of problem [1] I need to create explicit attribute arrays. I don't want to recreate them each time, but I'm also not able to dispose of a static readonly. What should I do?
    Code (CSharp):
    public static class PosNormUv0
    {
        public static readonly NativeArray<VertexAttributeDescriptor> Attributes = // how to dispose this?
            new NativeArray<VertexAttributeDescriptor>(new[]
            {
                new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.Float32, 3),
                new VertexAttributeDescriptor(VertexAttribute.Normal, VertexAttributeFormat.Float32, 3),
                new VertexAttributeDescriptor(VertexAttribute.TexCoord0, VertexAttributeFormat.Float32, 2),
            }, Allocator.Persistent);

        [StructLayout(LayoutKind.Sequential)]
        private struct VertexData
        {
            public float3 position;
            public float3 normal;
            public float2 uv0;
        }

        // jobs for this specific PosNormUv0.VertexData
        // A job needs the Attributes array to set up the mesh; that's why I can't use a managed VertexAttributeDescriptor[] array
    }

    3. It seems like a bug, but calling
    Code (CSharp):
    Mesh.MeshData.SetSubMesh(0, new SubMeshDescriptor(0, length, MeshTopology.Triangles), MeshUpdateFlags.Default);
    inside an IJob doesn't actually recalculate bounds. The docs say:
    The bounds, firstVertex and vertexCount values are calculated automatically by Mesh.SetSubMesh, unless the MeshUpdateFlags.DontRecalculateBounds flag is passed.
    Docs: https://docs.unity3d.com/2020.1/Doc...rence/Rendering.SubMeshDescriptor-bounds.html
    The docs for Mesh.MeshData.SetSubMesh just refer to Mesh.SetSubMesh.
    Explicitly setting the bounds in the SubMeshDescriptor also doesn't work.
     
    pragmascript likes this.