
Pass additional vertex data to the vertex shader without modifying mesh asset

Discussion in 'Shaders' started by Seneral, Aug 22, 2015.

  1. Seneral

    Seneral

    Joined:
    Jun 2, 2014
    Posts:
    1,206
    Hi there,
    I need to pass a float3 to each vertex in the vertex shader, so I'll probably have to use the vertex colors. But I can't bake them into the mesh: I have a base mesh and need to pass in data for each instance of the mesh separately. How can this be done efficiently?

    Now if I just change the vertex colors of an instance at runtime, it will change the underlying asset. But if I copy the mesh beforehand, it will clutter up my GPU RAM with all those instances of mesh data! How can I make the instances share the mesh data on the GPU and only have the vertex colors differ?

    If that doesn't work efficiently/at all, I'd use the second UV channel baked into the mesh to look up the data from a separate texture. It's a hack, but it would work somehow, I guess.

    Any ideas?
     
    Last edited: Aug 22, 2015
  2. Plutoman

    Plutoman

    Joined:
    May 24, 2013
    Posts:
    257
    frankfringe likes this.
  3. Seneral

    Seneral

    Joined:
    Jun 2, 2014
    Posts:
    1,206
    Yes, that should be exactly what I need! Thank you so much:)
     
  4. Seneral

    Seneral

    Joined:
    Jun 2, 2014
    Posts:
    1,206
    Yep, works:)
    First, I created a new Mesh and set only its vertices to the vertices of the base mesh (so colors can be assigned and the arrays are the same size), then I set the colors to whatever I want to send. Finally, I upload the mesh and assign it to the additionalVertexStreams of the MeshRenderer instance. My shader can then access the vertex colors I specified there without duplicating much data in GPU RAM or on the hard drive:)
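    The steps above can be sketched roughly like this (a minimal example; the component and the random color values are my own, but additionalVertexStreams is the actual MeshRenderer property):

    ```csharp
    using UnityEngine;

    // Overlay per-instance vertex colors on a shared base mesh.
    // The stream mesh only needs the channels it overrides (plus vertices,
    // so the array sizes match); the base mesh stays shared on the GPU.
    [RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
    public class VertexColorStream : MonoBehaviour
    {
        void Start()
        {
            Mesh baseMesh = GetComponent<MeshFilter>().sharedMesh;

            var stream = new Mesh();
            stream.vertices = baseMesh.vertices;

            var colors = new Color[baseMesh.vertexCount];
            for (int i = 0; i < colors.Length; i++)
                colors[i] = new Color(Random.value, Random.value, Random.value);
            stream.colors = colors;

            stream.UploadMeshData(false);
            GetComponent<MeshRenderer>().additionalVertexStreams = stream;
        }
    }
    ```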
    Thanks a bunch!

    EDIT: I didn't mention that I also want to use this on a SkinnedMeshRenderer, which doesn't have that option:(
     
    Last edited: Aug 22, 2015
    frankfringe likes this.
  5. Plutoman

    Plutoman

    Joined:
    May 24, 2013
    Posts:
    257
    That might be rougher, then... I imagine there's likely not much you can do. Are you targeting mobile, I assume? Desktop wouldn't likely have many issues with VRAM; meshes take up significantly less than textures.
     
    Seneral likes this.
  6. Seneral

    Seneral

    Joined:
    Jun 2, 2014
    Posts:
    1,206
    No, I'm not specifically targeting mobile. But my goal is to have characters and NPCs in the game with different "body specs". That means potentially up to 20 skinned meshes of ~10k polys each at the same time... It's meant to be general-purpose, so I'd like to spare as much as I can. For now I'll go with duplicating the mesh and work on the other tech, but I'm still looking for a better solution:)
     
  7. Plutoman

    Plutoman

    Joined:
    May 24, 2013
    Posts:
    257
    An alternative to the textures is to look at structured buffers;

    http://scrawkblog.com/2014/07/02/directcompute-tutorial-for-unity-buffers/

    It's a DX11 feature, but you could create a buffer for each object with exactly the float3 inputs. I haven't fully thought through how you would align and retrieve data from the buffer at the specific indices, but I'm quite sure there would be ways to handle that. Just throwing out ideas you could work with. You'd obviously have to render all of them separately, with per-renderer buffers.

    Or you could use that second UV channel baked into the mesh to look up from the buffer. Use the values as floats that cast/round to an int, and read from it the same way you would read from a flattened 2D array.

    It limits target platforms, but also would open up the possibility of easily changing these buffers with compute shaders. Otherwise, the texture approach is the best way I can think of. It's a bit hackish for sure, but it would work easily enough.
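    On the C# side, the per-renderer buffer setup might look like this (a sketch; the buffer name `_VertexData` is my own choice, and since MaterialPropertyBlock can't carry buffers, each renderer gets its own material instance via `.material`):

    ```csharp
    using UnityEngine;

    // Upload one float3 per vertex for this instance into a buffer
    // that the shader declares as: StructuredBuffer<float3> _VertexData;
    public class PerInstanceVertexData : MonoBehaviour
    {
        ComputeBuffer buffer;

        void Start()
        {
            Mesh mesh = GetComponent<MeshFilter>().sharedMesh;

            var data = new Vector3[mesh.vertexCount];
            // ... fill 'data' with this instance's per-vertex float3s ...

            buffer = new ComputeBuffer(data.Length, sizeof(float) * 3);
            buffer.SetData(data);

            // Accessing .material (not .sharedMaterial) duplicates the material,
            // so each instance can point at its own buffer.
            GetComponent<Renderer>().material.SetBuffer("_VertexData", buffer);
        }

        void OnDestroy()
        {
            if (buffer != null) buffer.Release();
        }
    }
    ```

    In the shader, a vertex input with the SV_VertexID semantic can then index straight into the buffer, as in the tutorial's second example.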
     
    Seneral likes this.
  8. Seneral

    Seneral

    Joined:
    Jun 2, 2014
    Posts:
    1,206
    I took a look at ComputeBuffers some time ago for a different project, but my problem was getting the vertex index to retrieve the data, as you said. In the second code example on the site you posted, though, the vertex shader takes an SV_VertexID, which might be the solution:) Unfortunately, MaterialPropertyBlock does not support buffers, but there is Material.SetBuffer, so I'll have to duplicate a base material for each mesh instance.
    I'll try that, thanks for your help!
     
  9. Plutoman

    Plutoman

    Joined:
    May 24, 2013
    Posts:
    257
    Yeah, this actually brought to mind an option for my terrain objects. I've struggled to think of how to apply more data to them, but I could very easily set up a buffer read for my own terrains, and the shaders would become a little easier to read than trying to remember which UV channel maps to which data.

    I think the approach I would use is SetBuffer plus SetInt for the buffer width. In a UV channel, assign each vertex an ID that you use for the buffer, and access it with round(uv.x) * _Width + round(uv.y).
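    The assignment side of that scheme might look like this (a sketch; `_Width` and the choice of the UV2 channel are just this example's conventions):

    ```csharp
    using UnityEngine;

    // Store each vertex's buffer index in UV2 as (row, column) so a shader
    // can recover it with: round(uv.x) * _Width + round(uv.y).
    public static class BufferIndexUV
    {
        public static void Assign(Mesh mesh, Material material, int width)
        {
            var uvs = new Vector2[mesh.vertexCount];
            for (int i = 0; i < uvs.Length; i++)
                uvs[i] = new Vector2(i / width, i % width);

            mesh.uv2 = uvs;
            material.SetInt("_Width", width);
        }
    }
    ```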

    Hope that works for ya! No problem at all.
     
  10. Seneral

    Seneral

    Joined:
    Jun 2, 2014
    Posts:
    1,206
    Yes, that should work, I guess. But remember, you don't need to sacrifice a UV channel at all; there's a cheaper and cleaner way: in the second code example, he takes an id with the semantic SV_VertexID, which I suppose returns the vertex index in the mesh array:) That should do it better...

    And for the UV approach, you would have to access it like this:
    round(uv.x*n) * n + round(uv.y*n)
    where n is a fixed value, the number of vertices per row/column of the UV grid, with n >= sqrt(vertexCount),
    and the UVs have to form a square grid over the vertices.
    Say an object has a vertex count of 86: the UVs would form a 10x10 grid, with the 9th row holding only 6 vertices and the 10th empty.
    n would be 10, obviously.
    The requirement is that the top-leftmost vertex starts at (0,0) and each row/column increments by 1/n.

    But the vertexID approach is way cleaner;)
     
    Last edited: Aug 24, 2015
  11. Plutoman

    Plutoman

    Joined:
    May 24, 2013
    Posts:
    257
    That wouldn't strictly be necessary, as UV values don't need to be confined to the 0-1 range; they can be assigned as arbitrary floats. What I momentarily got mixed up on was the VertexID; I didn't know what it corresponded to. However, if it's literally just the index into the vertex array... then by far, that's the simplest and easiest solution! No math, so you can add whatever data you want at the cost of a single buffer read per vertex.

    Either way, a good way to extend per-mesh data that I hadn't considered until you asked this question.
     
  12. NavyFish

    NavyFish

    Joined:
    Aug 16, 2013
    Posts:
    28
    This will work (I'm doing just that for a procedural planet), but it will cause a separate batch for each terrain patch due to them having different materials - even if the only difference between the two materials is a pointer to a different compute buffer.

    Please support my feedback request to have MaterialPropertyBlock support a "SetBuffer()" property by voting for it. If you find any workarounds in the meantime, let me know!

    Also, if anyone in this thread would like an example of how to generate vertex positions in a ComputeShader, then draw those using a (single) Mesh object without having to change any vertex data on that mesh asset, let me know and I can paste the relevant snippets from my own project.

    -Navy
     
  13. Seneral

    Seneral

    Joined:
    Jun 2, 2014
    Posts:
    1,206
    Cool, can you post a link please?

    That'd be great, especially if it can be applied to standard vert/frag shader, too:)
     
  14. NavyFish

    NavyFish

    Joined:
    Aug 16, 2013
    Posts:
    28
    Whoops! Thought I had posted the link, here it is: http://feedback.unity3d.com/suggestions/implement-materialpropertyblock-dot-setbuffer

    Here's a quick summary of the technique, pasted from my response to another thread. And yes, it integrates into the standard rendering pipelines (including PBR). I will try to post a code snippet with more detail tomorrow or this weekend - things are a bit busy at the moment, and I'll need some time to get back into this particular code project as it's been on hold for a couple of months.

     
    Last edited: Sep 24, 2015
  15. Seneral

    Seneral

    Joined:
    Jun 2, 2014
    Posts:
    1,206
    I spent all the votes I could spare (6) on it. Limiting us to 10 votes is so useless when there are dozens of items that really NEED votes;)

    What do you mean by pre-pass vertex shader? Just a normal vertex shader, or something special that maybe even supports correct depth writing? That sounds promising to me, because depth is also one of the problems I currently have. I'm making an erosion algorithm where the terrain meshes (255x255) are offset by a height map in the vertex shader, but when I want to add some depth fog in the water above (same model, offset plane), it does not write the depth:
    Screenshot1.jpg
    Depth is debugged here; as you can see, the depth of the initial plane is used:(
     
  16. NavyFish

    NavyFish

    Joined:
    Aug 16, 2013
    Posts:
    28
    Awesome, thanks. I don't think the code required to implement the feature will be much at all since it's fairly low-level, so fingers crossed we see it sometime soon!

    By 'pre-pass' I rather mean performing vertex modification within a surface shader. For examples of that, check here: http://docs.unity3d.com/Manual/SL-SurfaceShaderExamples.html , starting at the example titled
    'Normal Extrusion with Vertex Modifier'... There's also a vertex fog example there.
     
  17. Seneral

    Seneral

    Joined:
    Jun 2, 2014
    Posts:
    1,206
    OK, that's what I'm actually doing there, which gives me that depth problem.

    I'm looking into some techniques for writing to the depth buffer in the fragment program, which should give me the desired results...
     
  18. NavyFish

    NavyFish

    Joined:
    Aug 16, 2013
    Posts:
    28
    I think there were a few quirks to getting it working right. Like I said, it's been a while, and I'm not at home now, so I don't have access to the project. But if I remember correctly, the modified surface shader wouldn't execute unless attached to a camera. Not sure if that's relevant to your issue. Just know that it is possible! Good luck; I'll post back when I can access the code.
     