Official Feedback Wanted: Streaming Virtual Texturing

Discussion in 'Graphics Experimental Previews' started by AljoshaD, Mar 18, 2020.

  1. flyer19

    flyer19

    Joined:
    Aug 26, 2016
    Posts:
    121
    :) Thanks! But this demo has Virtual Texturing enabled in the Player Settings, and it still doesn't work.
     

    Attached Files:

  2. flyer19

    flyer19

    Joined:
    Aug 26, 2016
    Posts:
    121
    OK, it needs HDRP 9.0.0-preview.54.
     
  3. flyer19

    flyer19

    Joined:
    Aug 26, 2016
    Posts:
    121
    vt2.png vt3.jpg VT1.png
    Confused: the virtual texture memory is still very big, and sometimes the virtual texture profiler data doesn't show up.
     
  4. flyer19

    flyer19

    Joined:
    Aug 26, 2016
    Posts:
    121
    Any help? It seems VT does not work in 2020.2.0a19.
     
  5. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    Hi flyer19,
    In 2020.2 you should use HDRP 10 (not the 9-preview). We do have some small bugs in the profiler that should be fixed soon. Why do you think VT is not working? The screenshots show the VT debug lines, so it looks like it is working. I'm not sure why you see 1 GB of render textures; that might just be the editor. Do you see this in the standalone player? I don't think this is related to the VT system.
     
  6. flyer19

    flyer19

    Joined:
    Aug 26, 2016
    Posts:
    121
    I'm confused that the render textures take 1 GB of memory in editor mode. When building the level, the memory gets crazy big too. VT seems to work, but the total texture memory is much larger than VT's 384 MB. Thanks for your reply! And when will HDRP 10 be available?
     
  7. TheOtherMonarch

    TheOtherMonarch

    Joined:
    Jul 28, 2012
    Posts:
    862
    We are developing an RTS game where you only view a small segment of terrain at a time. I would think that, depending on camera angle, VT would allow higher-detail non-tiled terrain textures.
     
  8. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    It could make sense, indeed. Mostly for the material mask, which is unique everywhere. It could also make sense for your tiling textures if you don't blend many of them, they are high resolution, and they aren't always completely visible.
     
  9. TheOtherMonarch

    TheOtherMonarch

    Joined:
    Jul 28, 2012
    Posts:
    862
    We would not be tiling/blending textures. Ideally just one big custom ground texture.
     
  10. ChezDoodles

    ChezDoodles

    Joined:
    Sep 13, 2007
    Posts:
    107
    Any update on the ability to use the Parallax Occlusion Mapping node in ShaderGraph with Virtual Textures?
     
  11. DrSeltsam

    DrSeltsam

    Joined:
    Jul 24, 2019
    Posts:
    100
    The documentation states that Virtual Textures do not support AssetBundles. Does that mean objects loaded from bundles cannot use virtual textures at the moment? Is there a rough ETA for asset bundle support?

    Another question: Linux does not show up in the "Supported Platforms" section, although Vulkan is supported. Is Linux really not supported, or is it just missing in the docs?
     
  12. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    Materials in an AssetBundle cannot use VT at the moment. The VT system evaluates all referenced materials during the player build and generates the streaming data. Materials in AssetBundles are not referenced and will not be detected, so your build will not contain their streaming texture data. We also cannot store VT streaming data in AssetBundles yet. Adding AssetBundle support is our top priority; our goal is to add it by the end of next year, but it's a major development task, so no guarantees yet.

    Linux is indeed supported in 2020.2.
     
    JoNax97 likes this.
  13. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    A question that has come up: can you stream a heightmap for displacement mapping? Yes. In Unity 2020.2 with HDRP 10, you can use Streaming Virtual Texturing to stream a heightmap and use it for displacement mapping.

    In the following Shader Graph, there is one VT property that stores the heightmap in the 4th layer. You need to add a second Sample Virtual Texture node with "Automatic Streaming" disabled and Lod Mode set to "Lod Level". This allows you to sample the VT property in the vertex shader to offset the vertex position.
    DisplacementMappingWithStreamingVirtualTexturing.jpg DisplacementMappingWithStreamingVirtualTexturing2.jpg
     
  14. Hobodi

    Hobodi

    Joined:
    Dec 30, 2017
    Posts:
    101
    Lars-Steenhoff likes this.
  15. Onat-H

    Onat-H

    Joined:
    Mar 11, 2015
    Posts:
    195
    Is there already a solution for objects behind transparent objects (windows, etc.)? Something like a layer mask to ignore transparent objects, maybe? Currently the system can't handle fetching textures behind transparent objects, which really is a big problem for production use.
     
    rz_0lento likes this.
  16. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    There is no solution yet for transparent objects. It's on our short term roadmap though.
     
    m0nsky and Onat-H like this.
  17. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    We'll have a presentation on Streaming Virtual Texturing during the virtual Unite conference next week. You can find more info here.
    StreamingVT-Unite-Session.png
     
  18. Onat-H

    Onat-H

    Joined:
    Mar 11, 2015
    Posts:
    195
    Thanks for the quick answer Aljosha! Does that mean in the 2020 cycle? We would ideally like to stay on 2020 LTS.
     
  19. Onat-H

    Onat-H

    Joined:
    Mar 11, 2015
    Posts:
    195
    And while we are at it: a simple (revised) example script that demonstrates the fetching API (for example, fetching the lowest mip of everything at start) would be great too. The current example breaks when a mesh renderer has more than one material assigned.
    Even better: an option on the Sample Virtual Texture node (something like "always fetch lowest mip"). Usually it's not a problem to have blurry textures during fast movements, as long as there is something that resembles the actual texture instead of just a color. I think for most people on modern platforms, the cost of keeping a 128x128 version of every texture in memory would be negligible (especially compared to what not streaming would cost).
     
  20. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    VT support for transparency won't land before the 2021 cycle.

    Here's an experimental script that makes sure the lowest mip is in memory for every VT texture. This is just an example; it should be customized for your specific use.
     

    Attached Files:
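    For readers who can't grab the attachment, here is a rough, hypothetical sketch of what such a script might look like, assuming the experimental UnityEngine.Rendering.VirtualTexturing.Streaming API in 2020.2; the stack property name and the mip index used below are assumptions and must be adapted to your project:

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering.VirtualTexturing;

    // Hypothetical sketch: request the lowest mip of every VT stack in the
    // scene so a low-res fallback is always resident. Customize per project.
    public class PrefetchLowestMips : MonoBehaviour
    {
        // Name of the VT property as declared in Shader Graph (assumption).
        [SerializeField] string stackProperty = "_VirtualTexture";

        void Start()
        {
            int stackId = Shader.PropertyToID(stackProperty);
            foreach (var renderer in FindObjectsOfType<Renderer>())
            {
                // sharedMaterials handles renderers with multiple materials,
                // the case where the original example script broke.
                foreach (var mat in renderer.sharedMaterials)
                {
                    if (mat == null || !mat.HasProperty(stackId))
                        continue;
                    // Request the whole texture area (0..1 UV rect) at a high
                    // mip index, i.e. low resolution (assumed to be clamped
                    // internally to the stack's actual mip count).
                    Streaming.RequestRegion(mat, stackId, new Rect(0, 0, 1, 1),
                                            mipMap: 15, numMips: 1);
                }
            }
        }
    }
    ```

    Calling this once at startup (or after a level load) would keep a blurry fallback resident instead of a flat color during fast camera movement.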

  21. Onat-H

    Onat-H

    Joined:
    Mar 11, 2015
    Posts:
    195
    Great, thank you!

    Regarding transparency: Do you mean supporting transparent objects, or fetching the textures of objects behind transparent objects? The latter is the one that's very problematic for us.
     
  22. Onat-H

    Onat-H

    Joined:
    Mar 11, 2015
    Posts:
    195
    I was just testing the VTManualRequesting script and encountered very strange behaviour: I get a NullReferenceException when I have Reflection Probes in the scene (disabling the reflection probes solves the problem). What makes it strange is that this doesn't occur if I create a new Reflection Probe, only with ones that have been in the scene for, I'd guess, more than a year. Do you know if something changed with Reflection Probes? I remember that in one of the earlier HDRP versions, the Reflection Probe had an actual MeshRenderer to display the chrome gizmo. I might still have traces of these old (hidden) mesh renderers in my reflection probes, but I have no idea how to access them.
     
  23. Onat-H

    Onat-H

    Joined:
    Mar 11, 2015
    Posts:
    195
    OK, solved it. For anyone who encounters the same problem: you can access the old MeshRenderer by using the Inspector's Debug mode, then simply remove the MeshRenderer component.
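    A hypothetical editor snippet that automates the same cleanup, assuming the stale components sit on the same GameObject as the ReflectionProbe (the menu path and component set are assumptions):

    ```csharp
    #if UNITY_EDITOR
    using UnityEngine;
    using UnityEditor;

    // Hypothetical cleanup: strip the hidden MeshRenderer/MeshFilter that
    // older HDRP versions left on ReflectionProbe objects (the chrome gizmo).
    public static class ReflectionProbeCleanup
    {
        [MenuItem("Tools/Remove Stale Probe Renderers")]
        static void Clean()
        {
            foreach (var probe in Object.FindObjectsOfType<ReflectionProbe>())
            {
                // Hidden components are still reachable via GetComponent.
                var renderer = probe.GetComponent<MeshRenderer>();
                if (renderer != null) Object.DestroyImmediate(renderer, true);
                var filter = probe.GetComponent<MeshFilter>();
                if (filter != null) Object.DestroyImmediate(filter, true);
            }
        }
    }
    #endif
    ```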
     
  24. flyer19

    flyer19

    Joined:
    Aug 26, 2016
    Posts:
    121
    vt.png VT2.png
    Still confused by the profiler: Texture shows 1.26 GB, VirtualTexture shows 384 MB. Which is correct?
     
  25. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    Both are correct.
    Did you set "Virtual Texturing Only" = true in the texture importer on each texture that you assign to a VT property (and stream with VT)? If not, all textures will be entirely loaded in memory, and on top of that the VT GPU caches will be created (384 MB).
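    For anyone with many textures to flag, a small editor sketch could toggle this in bulk; it assumes the TextureImporter.vtOnly property exposed in 2020.2, and the menu path is made up:

    ```csharp
    #if UNITY_EDITOR
    using UnityEditor;

    // Sketch: mark the selected textures as "Virtual Texturing Only" so they
    // are never fully loaded, only streamed through the VT caches.
    // Assumes TextureImporter.vtOnly (Unity 2020.2+).
    public static class MarkVtOnly
    {
        [MenuItem("Tools/Mark Selected Textures VT Only")]
        static void Mark()
        {
            foreach (var guid in Selection.assetGUIDs)
            {
                string path = AssetDatabase.GUIDToAssetPath(guid);
                if (AssetImporter.GetAtPath(path) is TextureImporter importer)
                {
                    importer.vtOnly = true;
                    importer.SaveAndReimport();
                }
            }
        }
    }
    #endif
    ```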
     
    MartinTilo likes this.
  26. pbritton

    pbritton

    Joined:
    Nov 14, 2016
    Posts:
    159
    What is the difference between Virtual Texturing and Sampler Feedback in DirectX 12 Ultimate? This assumes there is a difference. If the technologies are trying to achieve the same goal, is one better than the other? Is Virtual Texturing going to implement the tech described for Sampler Feedback, or are they different sides of the same coin?
     
  27. TheOtherMonarch

    TheOtherMonarch

    Joined:
    Jul 28, 2012
    Posts:
    862
    Tiled Resources tiers 1-2 are DX12 features; Sampler Feedback and Tiled Resources tier 3 are DX12U features. DX11.2 also had tiers 1-2. This Virtual Texturing is DX11.

    Those are hardware features, while this is software. The DX features are also building blocks, while this is a complete system. Unity has long had a DX11.2 Tiled Resources implementation called Sparse Textures; those are limited to 16k.

    Hopefully this Virtual Texturing will eventually use some of the DX12 features, since DX12 adoption is becoming mainstream. Widespread DX12U feature adoption is still far in the future.
     
    Last edited: Nov 12, 2020
    pbritton likes this.
  28. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    Our Unite presentation on Streaming Virtual Texturing is now online. The 25-minute presentation explains the Virtual Texturing basics, compares it with mipmap streaming, and shows how to convert a material to SVT in Shader Graph. I'd like to create another half-hour video with a complete editor walkthrough on how to convert a project to SVT. Let me know what you'd like to see in that walkthrough.
     
    apkdev likes this.
  29. Onat-H

    Onat-H

    Joined:
    Mar 11, 2015
    Posts:
    195
    Nice presentation! Especially the profiling part was really helpful for determining the GPU cache size we need. One question though: what's the best way to determine the CPU cache size? What's the effect of the CPU cache in general?
     
  30. Onat-H

    Onat-H

    Joined:
    Mar 11, 2015
    Posts:
    195
    Another question related to manual requesting. The script you posted earlier works great, and looking at the profiler, the RequestRegion calls seem to be really fast. However, "VirtualTexturingEditorManager.FindTextureStackHandle" costs up to 9 ms on our dev machine. Is there any way to optimize this, or is it going to be optimized as VT matures? Unfortunately, we can't afford this cost in the actual game as-is. Thanks!
     
  31. PerunCreative_JVasica

    PerunCreative_JVasica

    Joined:
    Aug 28, 2020
    Posts:
    47
    Hi @AljoshaD,

    great video! We used Granite in the past with the built-in RP, but we have switched to URP and would like to know the ETA for the URP support mentioned in the video, i.e. whether it is near completion (2020.2/2021.1) or still far from leaving experimental (2021.2+).

    Also, could you elaborate on Linux support? It is not listed under supported platforms even though the Vulkan API is already supported.
     
  32. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    Thanks!

    On the CPU cache: if your hard drive is slow, then using a larger CPU cache makes sense. Larger is obviously better if you have the memory to spend. Increasing the size reduces the number of reads from disk and reduces the streaming artifacts caused by disk read latency (since the data is still in the cache). But there is a point of diminishing returns that depends on your project; you will probably see little benefit above 256 MB. It's something you need to experiment with. The CPU cache contains 1 MB pages that each hold multiple texture tiles.

    On the cost of FindTextureStackHandle: indeed, this is very high right now. It's definitely something we will improve, although it's not planned yet.

    On URP: this will not land before 2021.2, and potentially later. However, everything is available for you to add VT support to URP yourself. We first want to make SVT feature complete before rolling it out to URP, and we see AssetBundle support as a major missing feature. Can you do without it?

    Linux is indeed supported; the docs need to be updated.
     
  33. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
  34. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    Indeed, Procedural VT is still in development and I expect that the API will change significantly.
     
  35. PerunCreative

    PerunCreative

    Joined:
    Nov 2, 2015
    Posts:
    113
    Could you please elaborate on the steps required for a URP implementation? It would be great if you could provide some basic documentation for a custom SRP (in this case URP) implementation, something similar to the "Converting shaders" section in the original Granite documentation (add three Granite-specific properties, include GraniteUnity.cginc, use #pragma multi_compile __ GRANITE_RW_RESOLVE for the single-pass resolver, etc.).

    One additional question: what is the resolver implementation in the latest version? The presentation only mentions automatic detection based on the main camera. Are you using the single-pass resolver with an additional output buffer, or is this something new/superior?
     
  36. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    We use the single-pass resolver with an additional output buffer.

    You can take a look at HDRP to see how SVT is supported. Unfortunately, I don't have more info to share.
     
  37. PerunCreative

    PerunCreative

    Joined:
    Nov 2, 2015
    Posts:
    113
    URP port:
    Migrating cache settings, initial setup, etc. is quite straightforward. URP + Shader Graph + SVT seems to be working too (I haven't done a deep analysis of the shader yet). However, the texture is rendered at the lowest mip. Detection of the active tiles looks hardcoded to HDRenderPipeline, along with some GBuffer injections, etc. Is the SVT implementation deferred-only? If so, how are you planning to implement SVT for forward URP? At the moment it seems you can't add SVT support without the GBuffer, or am I missing something?
     
  38. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    SVT supports both Forward and Deferred in HDRP. It binds an extra render target in both cases.
     
  39. Kmsxkuse

    Kmsxkuse

    Joined:
    Feb 15, 2019
    Posts:
    305
    I just want to throw in my two cents and say I am eagerly awaiting SVT in Universal RP.

    I've tried it out in HDRP and I am very impressed. I tacked the full eight 20k textures of NASA's Blue Marble onto a single sphere, and while it did take 7 minutes to load, it got there eventually and with minimal artifacts.

    Higher-speed globe rotations did result in Google-Maps-like block-by-block loading along the edges of the camera, but that's just me playing around with an 80k texture on a single object.

    While HDRP did produce some very pretty globes, its additional post-processing and everything else is very annoying and quite excessive for the mini-project I'm intending to make.

    So I'll just slam my 16k texture onto a single sphere and hike my minimum requirements up to a 2 GB GPU. I'm content to wait the 2 or 3 years it takes for SVT to be ported to URP. It's a great feature, but not necessary.
     
    AljoshaD likes this.
  40. PerunCreative_JVasica

    PerunCreative_JVasica

    Joined:
    Aug 28, 2020
    Posts:
    47
    Hi @AljoshaD,

    two questions.

    1. How do we adjust the global mip bias offset? Reducing texture quality is crucial for lower quality settings (the equivalent of the Quality settings in the legacy version).
    2. How do we visualize the current content of the cache? The debug tiles feature is great, but for deeper debugging it is not sufficient.
     
    neko84086 likes this.
  41. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    Hi,
    1. Currently you cannot set the mip bias yourself. If you set the cache size lower, the bias is automatically raised to prevent cache thrashing. In the future we want to provide more control over the mip bias.

    2. There is no easy way to visualize the content of the cache. What problem are you actually trying to solve? Why do you need to inspect the cache?
     
  42. PerunCreative_JVasica

    PerunCreative_JVasica

    Joined:
    Aug 28, 2020
    Posts:
    47
    1. Please link this to the texture quality parameter in the Quality Settings, because it is quite annoying to balance multiple texture systems (e.g. it looks awful to have reduced texture quality on transparent objects, lightmaps, vegetation, and so on while opaque objects are high-res; consistency is really important here).

    2. With the cache overview it was much easier to adjust cache sizes, mip biases, prefetching, etc., because you could easily see what is being streamed, how often, the utilization of the cache, how long it takes to populate the cache after a load/teleport, and so on. This is much harder to do with only the tile debugger and RenderDoc or similar tools.

    3. Why does the resolver require a CommandBuffer in the Process function? I am in the process of porting SVT from HDRP to URP. Shaders, cache management, and VT feedback rendering (an additional forward buffer) with downsampling are working as expected, but the Process function in the resolver (followed by VT.System.Update) has zero effect (the cache remains empty, although manual requesting works correctly). I probably have a bug in the command buffer execution when downsampling the color buffer, but I still don't understand the additional usage of the buffer in the resolver. The resolver only needs the latest low-res VT feedback texture for tile streaming evaluation, right? Or is there some additional GPU buffer processing done internally?

    *Edit* OK, I think I understand now. The Process function wraps the async readback done by ProcessVTFeedback. However, the async readback is never performed, even though the execution flags are valid and the RTI of the downsampled RT is provided. Since it is an injected method, I can't progress without knowing why it fails :/ Internal_ProcessVTFeedback_Injected throws zero errors/warnings.
     
    Last edited: Jan 8, 2021
    Kmsxkuse likes this.
  43. AljoshaD

    AljoshaD

    Unity Technologies

    Joined:
    May 27, 2019
    Posts:
    220
    PerunCreative_JVasica likes this.
  44. dieterdb

    dieterdb

    Unity Technologies

    Joined:
    Apr 25, 2019
    Posts:
    6
    About the resolving issue: did you call UpdateSize with the dimensions of the RTI? You can call this every frame, as it only does something if the dimensions changed; otherwise it just early-outs. This sets up the internal state (including the async readback).

    More in-depth help for fixing resolver-related issues.
    There are two reasons why I would expect the resolver not to stream in new tiles:

    1. Something is wrong with the setup of the resolver, i.e. the residency analysis never triggered. There are a number of reasons this could happen, like the passed-in RTI being invalid (or having zero dimensions), or you did not call UpdateSize on the C# resolver object (so the internal state is not properly set up). There are some other smaller cases where it could "do nothing", but those should give you errors/asserts (as something about the VT system itself is in a really bad state).

    I would advise looking for the VirtualTexturingManager.ProcessFeedback marker in the profiler (on the render thread). If you see it, it is safe to assume the analysis is triggered (and so the downsampling/readback/... worked). See reason 2: triggered does not necessarily mean anything will be streamed in.

    You could try passing in the full-res RTI (rather than the downsampled version) to validate whether the downsampling is the issue. While it will be slow, it can help confirm that the issue is indeed inside the downsample.


    2. Everything in step 1 worked, but the content of the passed RTI is invalid. In that case, your shaders are not writing the correct values. Use the Frame Debugger (or RenderDoc if more info is needed) to see what is happening. Typically this means one of the input parameters is either not bound or incorrectly bound. If regular VT sampling (with some manual requests) works, then this is most likely not the issue.



    Also a note of caution if you are using URP. Since feedback rendering is a separate pass, you might be tempted to do it at lower res (and skip the downsampling pass). While this could be a good idea, it will not work without some additional setup (nothing too scary, but otherwise mip calculations will be wrong due to mismatched derivatives). This is already more in the realm of content/project-dependent optimizations, so I would advise first getting everything up and running; if you are interested, we can write a small guide on how to do it and the pros/cons.
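    To make the moving parts concrete, here is a minimal sketch of the per-frame resolve loop described above. This is a hedged sketch, not an official sample: it assumes the experimental UnityEngine.Rendering.VirtualTexturing.Resolver API shapes in 2020.2, and that `feedbackRT` is filled elsewhere by your custom URP feedback pass:

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.VirtualTexturing;

    // Sketch of per-frame VT feedback resolution (assumed API shapes).
    public class FeedbackResolveLoop : MonoBehaviour
    {
        public RenderTexture feedbackRT;   // written by a custom URP pass
        Resolver resolver = new Resolver();

        void LateUpdate()
        {
            var cmd = CommandBufferPool.Get("VT Resolve");
            // Safe to call every frame; early-outs if dimensions are unchanged.
            resolver.UpdateSize(feedbackRT.width, feedbackRT.height);
            // Kicks the residency analysis (async GPU readback internally).
            resolver.Process(cmd, feedbackRT);
            Graphics.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
            // Apply the streaming requests produced by the analysis.
            UnityEngine.Rendering.VirtualTexturing.System.Update();
        }
    }
    ```

    If the VirtualTexturingManager.ProcessFeedback marker never appears in the render-thread profiler with a loop like this, reason 1 above (resolver setup) is the place to look first.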
     
  45. PerunCreative_JVasica

    PerunCreative_JVasica

    Joined:
    Aug 28, 2020
    Posts:
    47
    @dieterdb thanks for the advice!

    I've solved the issue. The problem was that the RT passed to the resolver had to be manually created using RenderTexture.Create. Discovering this was really challenging. Please add a check to the resolver that the passed texture is actually created. Also, someone should improve the documentation on this subject, because the whole "rt = new RenderTexture(...)" vs. "rt.Create()" distinction is shrouded in mystery. It would be more than helpful for the documentation to explain when an allocation using just new RenderTexture(...) is suitable for shader outputs, read/write operations in buffers, etc., and when you have to call .Create() manually (so that operations like the async readback in the resolver work).
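    For anyone hitting the same trap, the distinction in a nutshell (standard RenderTexture API; the helper name and format here are just an illustration):

    ```csharp
    using UnityEngine;

    public static class FeedbackRTFactory
    {
        // new RenderTexture(...) only sets up the C# object and descriptor;
        // the GPU resource is created lazily when the engine first renders
        // into it. APIs that read the resource directly (like the resolver's
        // async readback) need it to exist up front, so call Create().
        public static RenderTexture MakeFeedbackRT(int width, int height)
        {
            var rt = new RenderTexture(width, height, 0,
                                       RenderTextureFormat.ARGB32);
            rt.Create(); // allocate the GPU resource now, not lazily
            Debug.Assert(rt.IsCreated());
            return rt;
        }
    }
    ```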

    However, after this was resolved I encountered another problem. I am currently working on support for multiple cameras (Scene view, Game view...), resolution changes, etc. When the resolution changes, I recreate the render textures used for VT feedback, but for some reason, once the resolver has been populated with a proper RT, calling UpdateSize has zero effect and the console is spammed with:

    AsyncGPUReadback - Out of bounds arguments - src offset(0,0,0,0) dst dim(240,135,1) src dim(100,56,1)

    The previous VT feedback RT was discarded, a new one was created, and the values passed to the resolver in UpdateSize are the same as the width/height of the new RT, but for some reason CurrentWidth/Height in the resolver remain unchanged o_O Are there additional steps needed besides releasing the previous RT, creating a new one, and updating the size of the resolver?
     
  46. dieterdb

    dieterdb

    Unity Technologies

    Joined:
    Apr 25, 2019
    Posts:
    6
    I'll add some additional checks inside the resolver implementation to not only catch these kinds of "invalid parameter early-outs" but also provide actionable feedback.

    I flagged your concerns about the unclear documentation of the RenderTexture behavior with our internal docs team.

    You can have multiple (independent) resolver objects, so in that use case one per view might be a valid solution.

    But that might not fix the "out of bounds" error you are seeing.
    That one is actually related to the fact that HDRP uses the RTHandle system (rather than RenderTextures), and that system does not downsize (it just uses a subrect of a full-resolution target). The Resolver object mimics this behavior (in the editor; in-game it will always rescale).
    You resized your RenderTexture, but the Resolver object did not (even though you called UpdateSize), so that might be why you see this error. Can you try the Process overload that takes a subrect, passing in the dimensions of the actual RenderTexture? Alternatively, recreating the resolver might work (though that comes with some performance penalty).

    The resolver's (non-)resizing behavior is too restrictive, and I will update it so the resolver is more SRP-agnostic. This is very good feedback!
     
  47. Onat-H

    Onat-H

    Joined:
    Mar 11, 2015
    Posts:
    195
    With Unity 2021.1 in beta now, do you have any news/updates regarding Addressables support, and a way to render virtual textures behind transparent objects (like windows) in this tech cycle?
     
  48. Kmsxkuse

    Kmsxkuse

    Joined:
    Feb 15, 2019
    Posts:
    305
    In my experience, VTs work perfectly well on opaque objects behind transparent ones.

    They just don't work if the material the VT is rendered on is itself transparent. You can have a VT on a wall behind a window, but not on the window itself.
     
  49. Onat-H

    Onat-H

    Joined:
    Mar 11, 2015
    Posts:
    195
    Unfortunately, it currently doesn't. It's likely because of the way the textures to be streamed are chosen based on the view frustum: everything behind windows, for example, remains untextured until there is nothing "occluding" the view between the camera and the texture to be streamed (even if the occluder is a transparent object).
     
  50. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    Hi!

    I want to add two Sample Virtual Texture nodes to my Shader Graph, but the second of them is intended to be optional. I'm planning to feed a set of custom lightmaps (4 texture maps) into each of these nodes and lerp between them at runtime in some situations. The thing is, I will fill both sampler nodes (8 textures in total) for some materials and only one node for the others, so one of the samplers may remain empty in some cases. Is that a bad approach? Is it better to create a separate shader with only one VT sampler for such scenarios?

    Also, if you use more than 8 regular Texture 2D samplers in Shader Graph, you get the error "Maximum ps_4_0 sampler register index (16) exceeded" and have to reuse sampler states. How does this apply to VT samplers, since they take 4 texture maps each? Do they count as 4 samplers? Or, if they don't, does that mean I can use 8 VT samplers (32 texture maps in total) without running into this error?

    And is it okay to use both VT samplers and regular 2D/3D samplers in the same graph?

    Thanks!