
Feedback Wanted: Mesh scripting API improvements

Discussion in 'Graphics Experimental Previews' started by Aras, May 26, 2019.

  1. MUGIK

    MUGIK

    Joined:
    Jul 2, 2015
    Posts:
    325
    +1 for Mesh.ApplyWritableMeshData method.
    I don't have numbers on how long it takes to call Mesh.AllocateWritableMeshData, but I'm assuming it costs a bit.
    And the ability to reuse mesh data for things that change every frame would also be really nice!
     
    awesomedata likes this.
  2. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    169
    Here's all the code I currently use to implement non-disposing ApplyMeshData methods (plus a way to slice MeshDataArrays) https://gist.github.com/DaZombieKiller/b42e5847d650781a2dadc64695504f95

    Note that it makes use of System.Runtime.CompilerServices.Unsafe, so you'll need that in your project.
     
    MUGIK likes this.
  3. SirIntruder

    SirIntruder

    Joined:
    Aug 16, 2013
    Posts:
    46
    Code (CSharp):
    meshDataArray = Mesh.AllocateWritableMeshData(1);
    var meshData = meshDataArray[0];
    meshData.SetVertexBufferParams(vertexCount, TerrainVertex.VertexAttributeDescriptors);
    meshData.SetIndexBufferParams(indexCount, IndexFormat.UInt32);
    meshData.subMeshCount = 1;

    var vertexBuffer = meshData.GetVertexData<TerrainVertex>();
    var indexBuffer = meshData.GetIndexData<int>();

    // schedule job that writes to vertexBuffer
    // schedule job that writes to indexBuffer

    When I try this, I get errors on the scheduling of the second job, saying:

    Code (CSharp):
    InvalidOperationException: The previously scheduled job VertexBufferJob writes to the UNKNOWN_OBJECT_TYPE VertexBufferJob .vertices. You must call JobHandle.Complete() on the job VertexBufferJob, before you can read from the UNKNOWN_OBJECT_TYPE safely.
    This seems like a bug to me - why shouldn't I be able to write to index and vertex buffer in parallel?

    I tried to have indexJob take a dependency on vertexJob, but it doesn't fix the error - the only solution is to run those jobs strictly sequentially.

    Edit: NativeDisableContainerSafetyRestriction also avoids the error, without causing any visible issues.
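    For reference, a minimal sketch of where that attribute goes (the job and field names here are placeholders, not the original code):

    Code (CSharp):
    using Unity.Collections;
    using Unity.Collections.LowLevel.Unsafe;
    using Unity.Jobs;

    // Placeholder index-writing job. The attribute tells the job safety system not to treat
    // this array as aliasing the vertex buffer obtained from the same MeshData, so both
    // buffer-writing jobs can be scheduled in parallel.
    struct IndexBufferJob : IJob
    {
        [NativeDisableContainerSafetyRestriction]
        public NativeArray<int> Indices; // from meshData.GetIndexData<int>()

        public void Execute()
        {
            for (int i = 0; i < Indices.Length; i++)
                Indices[i] = i;
        }
    }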


    Additional note: I previously did this by manually allocating native arrays, which were then set on the mesh using SetVertexData()/SetIndexData(); my understanding is that the MeshData approach should save me one extra copy of the entire mesh data.
     
    Last edited: May 30, 2021
  4. RecursiveEclipse

    RecursiveEclipse

    Joined:
    Sep 6, 2018
    Posts:
    298
    Is there no good way to dispose of MeshDataArray without introducing a sync point? MeshDataArray can't be disposed using .WithDisposeOnCompletion in an Entities.ForEach, Job.WithCode, or [DeallocateOnJobCompletion], and it doesn't even have a Dispose method with a JobHandle parameter.

    Is there a good practice/workaround here? Or is there a technical reason? This is my biggest gripe with this API: it's fast and DOTS-friendly, but you can also take a significant hit.
     
    MUGIK likes this.
  5. pbhogan

    pbhogan

    Joined:
    Aug 17, 2012
    Posts:
    325
    +1 for Mesh.ApplyWritableMeshData :)

    I think a common situation is generating a bunch of meshes in jobs without knowing exactly how big they will be.

    Right now, I'm handling this by appending triangles to several vertex NativeLists in jobs and then once they're all done, calling Mesh.SetVertexBufferParams, Mesh.SetVertexBufferData, Mesh.SetIndexBufferParams, Mesh.SetIndexBufferData, Mesh.SetSubMesh and finally Graphics.DrawMesh all on the main thread for each of these vertex lists. The nice thing is that the lists can be reused every frame and don't need to be deallocated.
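    A minimal sketch of that per-list, main-thread sequence (the vertex type, layout and transform here are placeholders, not taken from my actual project):

    Code (CSharp):
    using Unity.Collections;
    using UnityEngine;
    using UnityEngine.Rendering;

    static class MeshApplyExample
    {
        // Uploads one reusable vertex/index list pair into a Mesh and draws it.
        // The NativeLists persist across frames and are simply cleared and refilled by jobs.
        public static void ApplyAndDraw<TVertex>(
            Mesh mesh, NativeList<TVertex> vertices, NativeList<ushort> indices,
            VertexAttributeDescriptor[] layout, Material material, Matrix4x4 localToWorld)
            where TVertex : unmanaged
        {
            int vertexCount = vertices.Length;
            int indexCount  = indices.Length;

            mesh.SetVertexBufferParams(vertexCount, layout);
            mesh.SetVertexBufferData(vertices.AsArray(), 0, 0, vertexCount);

            mesh.SetIndexBufferParams(indexCount, IndexFormat.UInt16);
            mesh.SetIndexBufferData(indices.AsArray(), 0, 0, indexCount);

            mesh.SetSubMesh(0, new SubMeshDescriptor(0, indexCount));

            Graphics.DrawMesh(mesh, localToWorld, material, 0);
        }
    }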

    It seems like this API would allow for moving some of those calls into jobs. It does seem like you have to call SetVertexBufferParams twice: once to preallocate some maximum number of vertices and then again to set the actual number of vertices once you're done. I'm not sure what kind of performance hit that might have, or even if that's safe to do. It would be nice to be able to avoid that. And then since you have to call Mesh.ApplyWritableMeshData on the main thread, is it actually any faster or is it then just doing a bunch of copying and it's basically the same as before?

    While we're hand-wavy wishlisting ;), it sure would be nice to be able to queue up the renders in jobs too. I know Mesh and Material can't go in jobs currently, but it would be nice to have some kind of job-compatible versions or handle or something so we can do the equivalent of Mesh.ApplyWritableMeshData and Graphics.DrawMesh in a job and not have to stall to call it a bunch of times sequentially on the main thread.
     
    awesomedata likes this.
  6. bitinn

    bitinn

    Joined:
    Aug 20, 2016
    Posts:
    882
    I think it's worth clarifying which API is more performant for which usage, now that we have APIs that support:

    - Mesh.SetVertices, which can read a slice of a NativeArray
    - Mesh.SetVertexBufferData, which does more or less the same
    - Mesh.ApplyAndDisposeWritableMeshData, which can apply the same to multiple meshes (or merge them into a single mesh)

    Say we are procedurally generating a mesh: it's not clear why using the third option, i.e. the MeshData API, is better for performance, since the allocation is going to happen no matter what we do.

    In this case, creating NativeArrays manually and then writing them through Mesh.SetVertexBufferData appears faster, because we can pre-allocate a certain number of vertices and reuse that array through slices (sketched below), whereas Mesh.ApplyAndDisposeWritableMeshData kills this intermediate storage.
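    A sketch of that slice-reuse idea, uploading only one range of a large persistent array into a mesh (the names and counts are illustrative):

    Code (CSharp):
    using Unity.Collections;
    using UnityEngine;
    using UnityEngine.Rendering;

    static class SharedVertexBufferExample
    {
        // sharedVertices is allocated once with Allocator.Persistent and reused every frame;
        // each mesh uploads only its own [offset, offset + count) slice.
        public static void UploadSlice<TVertex>(
            Mesh mesh, NativeArray<TVertex> sharedVertices, int offset, int count,
            VertexAttributeDescriptor[] layout)
            where TVertex : struct
        {
            mesh.SetVertexBufferParams(count, layout);
            mesh.SetVertexBufferData(sharedVertices, offset, 0, count);
        }
    }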

    Let's say we expand the above example into generating multiple meshes in parallel: it is also not clear whether Mesh.AllocateWritableMeshData is more performant than the manual NativeArray approach. The main reason is that we probably don't know exactly how many meshes we will generate, so it's hard to tell whether allocating a lot of them in advance is a good strategy.

    In short, MeshData seems very good for fast access, but for writes I haven't fully understood the benefit.
     
    awesomedata and MUGIK like this.
  7. pbhogan

    pbhogan

    Joined:
    Aug 17, 2012
    Posts:
    325
    Having got a bit of experience with this the last few days, I think I can answer this.

    The benefit is, if you're generating multiple meshes in jobs, you can move much of the setup and copying of mesh data into jobs too, thus taking it off the main thread.

    It's fairly minor. I went from sequentially doing it all on the main thread in about 0.3 ms to moving that all to jobs and then only doing the final draw call on the main thread in 0.065 ms. The time didn't disappear, it just got spread out over multiple threads. But I also don't have to stall for all the generating jobs to finish before I can work on applying the mesh data. So it's not nothing.

    Now, I'm still using jobs to build all the procedural mesh data in NativeLists first, and then just copying the results. It's essentially the same as Mesh.SetVertexBufferData, but just spread out in jobs:

    Code (CSharp):
    MeshData.SetVertexBufferParams( vertexCount, SurfaceVertex.Layout );
    MeshData.SetIndexBufferParams( vertexCount, IndexFormat.UInt16 );
    NativeArray<SurfaceVertex>.Copy( Vertices, 0, MeshData.GetVertexData<SurfaceVertex>(), 0, vertexCount );
    NativeArray<ushort>.Copy( Indices, 0, MeshData.GetIndexData<ushort>(), 0, vertexCount );
    MeshData.subMeshCount = 1;
    MeshData.SetSubMesh( 0, new SubMeshDescriptor( 0, vertexCount ), meshUpdateFlags );
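    For illustration, here is roughly what that copy step looks like once it's wrapped in a job; the SurfaceVertex type shown here is just a stand-in for the real one:

    Code (CSharp):
    using System.Runtime.InteropServices;
    using Unity.Collections;
    using Unity.Jobs;
    using UnityEngine;
    using UnityEngine.Rendering;

    // Stand-in vertex layout (position + normal) for the SurfaceVertex used above.
    [StructLayout(LayoutKind.Sequential)]
    struct SurfaceVertex
    {
        public Vector3 Position;
        public Vector3 Normal;

        public static readonly VertexAttributeDescriptor[] Layout =
        {
            new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.Float32, 3),
            new VertexAttributeDescriptor(VertexAttribute.Normal,   VertexAttributeFormat.Float32, 3),
        };
    }

    // Fills one writable MeshData from lists produced by earlier generation jobs.
    struct ApplySurfaceJob : IJob
    {
        public Mesh.MeshData MeshData; // one element of a Mesh.MeshDataArray
        [ReadOnly] public NativeList<SurfaceVertex> Vertices;
        [ReadOnly] public NativeList<ushort> Indices;

        public void Execute()
        {
            int vertexCount = Vertices.Length;
            int indexCount  = Indices.Length;

            MeshData.SetVertexBufferParams(vertexCount, SurfaceVertex.Layout);
            MeshData.SetIndexBufferParams(indexCount, IndexFormat.UInt16);

            NativeArray<SurfaceVertex>.Copy(Vertices.AsArray(), 0, MeshData.GetVertexData<SurfaceVertex>(), 0, vertexCount);
            NativeArray<ushort>.Copy(Indices.AsArray(), 0, MeshData.GetIndexData<ushort>(), 0, indexCount);

            MeshData.subMeshCount = 1;
            MeshData.SetSubMesh(0, new SubMeshDescriptor(0, indexCount));
        }
    }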
    You can get around the repeated allocation/dispose by using reflection, like someone posted earlier - though I'm not 100% sure about the safety, and I'm still debugging some graphical glitching I get occasionally. Hopefully Unity gives us an official Mesh.ApplyWritableMeshData call. But that's a somewhat minor performance difference.

    So TLDR, it's an improvement, but not necessarily game changing. Mileage may vary. Probably it opens the door to other improvements later.

    It would be nice if, instead of building in NativeLists, there were an official way to add to MeshData iteratively, so we could save the buffer memory and copy time and squeeze out a little more performance that way.
     
    awesomedata and bitinn like this.
  8. bitinn

    bitinn

    Joined:
    Aug 20, 2016
    Posts:
    882
    Thanks for the write-up, you cleared up a key confusion here: the cost of these two approaches:

    - Mesh.AllocateWritableMeshData()
    - MeshData.SetVertexBufferParams()

    vs

    - Mesh.SetVertexBufferData()

    Based on your benchmark, while there are allocations on the main thread with Mesh.AllocateWritableMeshData, the penalty is worth the tradeoff, because now we can fill the bulk of the vertex and index buffers in jobs, which in turn requires fewer sync points on the main thread.
     
  9. bitinn

    bitinn

    Joined:
    Aug 20, 2016
    Posts:
    882
    Now that I am using MeshDataArray & Mesh.ApplyAndDisposeWritableMeshData() more extensively, I'm starting to see their problems too.

    - Problem 1: In procedural generation we generally don't know how many meshes we need beforehand (think LOD and culling). The lack of a way to allocate a larger MeshDataArray and then slice it before applying is very problematic.

    - Problem 2: MeshData-related jobs are generally the longer-running jobs, and we would like to run them in parallel as much as possible, but because of Problem 1 we end up needing to apply multiple MeshDataArrays. Can you put them in a NativeArray<MeshDataArray>? Nope. Can we somehow apply individual MeshData instances? Nope, AFAIK.

    - Problem 3: Now add data streaming into the mix: on my main thread there is file IO or GC alloc happening, so ideally we would need to interleave mesh generation jobs in between. Can we somehow wait for a job handle to complete before applying a MeshDataArray from jobs? Nope, AFAIK.
     
    awesomedata likes this.
  10. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    169
    There's an internal method that allows you to do this, so you can call it via reflection. Of course, that's very much not ideal.
     
  11. pbhogan

    pbhogan

    Joined:
    Aug 17, 2012
    Posts:
    325
    Agree on all of these.

    You might be able to deal with problem 2 by putting the individual MeshData structures in a NativeArray.

    Alternatively, I only have one MeshDataArray, since I happen to know the maximum mesh count I might need in advance, even if I won't use them all. So what I do is create a job for every MeshData in the array, add the job handle to a NativeList<JobHandle> and then use JobHandle.CombineDependencies to make the render code that applies and draws the meshes wait until all the generating jobs are complete, since you can't ApplyToMeshes or DrawMesh in jobs. It turns out those calls are quite inexpensive, though. Most of the work is the mesh generation and setting mesh data, as you point out, and those can run in parallel.
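    A rough sketch of that scheduling pattern (GenerateMeshJob, meshes, transforms and material here are hypothetical placeholders; meshes is a pre-created List<Mesh> with one entry per MeshData):

    Code (CSharp):
    // One writable MeshData per potential mesh; maxMeshCount is known up front.
    Mesh.MeshDataArray meshDataArray = Mesh.AllocateWritableMeshData(maxMeshCount);

    var handles = new NativeList<JobHandle>(maxMeshCount, Allocator.Temp);
    for (int i = 0; i < maxMeshCount; i++)
    {
        var job = new GenerateMeshJob { MeshData = meshDataArray[i] /* plus per-mesh inputs */ };
        handles.Add(job.Schedule());
    }

    // Single wait for all generation jobs, then apply and draw on the main thread.
    JobHandle.CombineDependencies(handles.AsArray()).Complete();
    handles.Dispose();

    Mesh.ApplyAndDisposeWritableMeshData(meshDataArray, meshes);
    for (int i = 0; i < meshes.Count; i++)
        Graphics.DrawMesh(meshes[i], transforms[i], material, 0);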

    So you could probably do the same thing, just with multiple MeshDataArrays. Or, if you have some idea what your upper limit might be, preallocate more than you need at the cost of some memory and use only what you need.

    All that said, I hope to see the Mesh API evolve to consider these issues.
     
    bitinn likes this.
  12. Unarmed1000

    Unarmed1000

    Joined:
    Sep 12, 2014
    Posts:
    20
    A bit late to the party, but I didn't see anyone mention the strange fact that Mesh.subMeshCount can mess with your index buffer size.

    So you have this perfectly sized index buffer that fits your needs, then you set subMeshCount and suddenly it decides to change the index buffer size just because you now need fewer sub-meshes.

    Seriously who thought that was a good idea?

    Does anyone have a good idea to workaround the issue?

    EDIT: the workaround is to use only SetSubMeshes, as it also sets subMeshCount without messing with the index buffer.
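    A small sketch of that workaround, setting the descriptors (and therefore the count) through SetSubMeshes in a single call (the index counts are illustrative):

    Code (CSharp):
    // Assigning mesh.subMeshCount directly can resize the index buffer.
    // SetSubMeshes takes the count implicitly from the descriptor array instead.
    var subMeshes = new SubMeshDescriptor[]
    {
        new SubMeshDescriptor(0, firstIndexCount),
        new SubMeshDescriptor(firstIndexCount, secondIndexCount),
    };
    mesh.SetSubMeshes(subMeshes);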
     
    Last edited: Nov 10, 2021
    a436t4ataf likes this.
  13. Aras

    Aras

    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,764
    No, it's a terrible idea indeed. But such is the life of public APIs: we made a mistake ten years ago, and now we can't ever fix the behavior, since there's code out there depending on subMeshCount working exactly like it does. The documentation explicitly has a note about this strange behavior: https://docs.unity3d.com/ScriptReference/Mesh-subMeshCount.html
     
    Thaina likes this.
  14. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,128
    It's always strange to me that Unity will use this as a reason to not fix something horrible in an API, like this, or linear EXR files being completely unusable because they get gamma corrected when they shouldn't, but will break all rendering every few months with the SRPs and make us all diff every file in the depot and figure it out with no documentation or warning.

    That seems rather inconsistent, to put it mildly.

    There's always code depending on something working "exactly like it does". With proper documentation and patch notes, it's acceptable to make changes to those APIs and update them to make sense or be fixed, and to force people to update that code.

    It seems like half of Unity runs under the idea that no changes to APIs can ever break things, and the other half seems to think breaking everything is what they get paid for. Both, IMO, are too extreme.
     
    TerraUnity, a436t4ataf and Jes28 like this.
  15. Aras

    Aras

    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,764
    I don't disagree.

    The subMeshCount behavior is perhaps not "completely idiotic" though. Like, ok right now I would not do that behavior, but it also kinda makes sense for that behavior to happen in some cases. As long as it's pointed out in the documentation, I think it's okay-ish. And that's exactly why some months ago I've put the note in the documentation.

    The other issue you mentioned (EXR vs color spaces), I was under the impression that someone was fixing that many months ago. But I could be imagining things. Can you remind me of the case # so that I can remember what exactly it's about and chase things up?

    As per the "SRPs keep changing things all the time without any notice or documentation", yeah I'm with you there. I would try to not do that if I were working on SRPs. But I'm not, so :/
     
    TerraUnity likes this.
  16. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,128
    Case 1080483 is one I reported in 2018, though I'm pretty sure @bgolus reported it as well before then. Essentially if you're in Gamma rendering mode and you save a linear EXR texture, it gamma corrects it even though it's linear. This was closed because supposedly things were relying on this behavior, but I can't imagine what would possibly rely on that behavior, since every time you serialize and deserialize the data, even if you correct for it, you get different results.

    It's funny, large parts of MicroSplat had to be designed differently to work around this issue, and it's the reason every MicroSplat object needs a component on it. Without being able to serialize a 16bit linear texture reliably, I had to store things in a scriptable object and generate the data on demand. It makes MicroSplat heavier than I would like for object workflows.

    In newer projects I avoid this by using ScriptableAssetImporter to turn my files into actual textures at import time rather than having to stick a component on everything. But that only works for files created inside Unity.
     
  17. Aras

    Aras

    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,764
  18. a436t4ataf

    a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    1,334
    I have noticed that many (many) Unity teams don't seem to be aware of the ability to 'obsolete' API calls, and that this is supported right down to the compiler level. On other platforms (both engines and languages) that are 10+ years old, we're accustomed to seeing vast amounts of obsoleted calls, partly because it's a way of cleaning up the codebase but - IME on the other side of the fence - *mostly* because it lets us put in compiler-enforced, IDE-supported notes to current engineers downstream.

    e.g.: [Obsolete("This call had some inconsistencies, use [the other version] instead - it works identically but fixes the mistakes; if the other one works for your code you should use it from now on; this version will be removed in a future release", false)]

    ... that in a later release gets that 'false' upgraded to a 'true'
    ... and then one release later the API call gets deleted. And no-one will complain! They've had auto compiler warnings about it for (at least) two releases, and auto compiler errors for a release.

    I've always thought it should go something like:

    Unity 2025.1.x / .2.x / .3.x : [Obsolete, false]
    Unity 2025.4.x LTS: [Obsolete, true]
    Unity 2026.1.1 : (DELETED)

    ... is something like that a possibility here? Given @Unarmed1000's observation that using the other API call avoids the (in hindsight) bad behaviour, as a developer the HUGE thing I'd want is some flag saying 'you probably don't want to use this method, but you can if your code depends on it' -- i.e. an [Obsolete("..", false)] in the codebase would be perfect.

    EDIT: and for the record: I do this in my assets, to great effect! When someone is dependent upon a call I think was badly designed (I f***ed up when I wrote it originally) - and if they actually still need that behaviour - I get contacted when they notice the new warnings. More often: they see the warning, read the note, look at their code, realise that their own code would be cleaner NOT being dependent on this odd behaviour, clean up their code, save, and problem solved.
     
    MUGIK likes this.
  19. Aras

    Aras

    Unity Technologies

    Joined:
    Nov 7, 2005
    Posts:
    4,764
    They most definitely are. In all our current Editor + Engine (not counting packages) public APIs, I see 1886 Obsolete attributes right now :)
     
    a436t4ataf likes this.
  20. a436t4ataf

    a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    1,334
    That's great - I must be using the wrong bits of Unity :D because I rarely see them (have submitted some bug reports suggesting/requesting them in the past, where I've seen particularly gnarly cases which are crying out for them).

    Anyway ... a possible solution/workaround for the Mesh issues here?
     