Discussion in 'Graphics Experimental Previews' started by AljoshaD, Mar 18, 2020.
But this demo's player settings had Virtual Texturing enabled, and it still doesn't work.
OK, it needs HDRP 9.0.0-preview.54.
Confused: virtual texture memory is still very big, and sometimes the virtual texturing profiler shows nothing. Any help? It seems VT does not work in 2020.2.0a19.
in 2020.2 you should use HDRP 10 (and not 9-preview). We do have some small bugs with the profiler that should be fixed soon. Why do you think VT is not working? The screenshots show the VT debug lines so it looks like it is working. I'm not sure why you see 1GB of render textures, that might just be the editor. Do you see this in the standalone player? I don't think this has anything to do with the VT system.
Confused that the render texture takes 1 GB of memory in editor mode. When building the level, the memory gets crazy big too. VT seems to work, but the total texture memory is much larger than VT's 384 MB. Thanks for your reply! And when will we get HDRP 10?
We are developing an RTS game where you will only view a small segment of terrain at a time. I would think that depending on camera angle VT would allow higher detail non-tiled terrain textures.
It could make sense indeed, mostly for the material mask, which is unique everywhere. It could also make sense for your tiling textures if you don't blend many of them, they are high resolution, and they aren't always completely visible.
We would not be tiling / blending textures. Ideally just one big custom ground texture.
Any update on the ability to use the Parallax Occlusion Mapping node in ShaderGraph with Virtual Textures?
The documentation states that Virtual Textures do not support AssetBundles. Does that mean objects loaded from bundles cannot use virtual textures at the moment? Is there a rough ETA for asset bundle support?
Another question: Linux does not show up in the "Supported Platforms" section, although Vulkan is supported. Is Linux really not supported, or is it just missing in the docs?
Materials in an AssetBundle cannot use VT at the moment. The VT system evaluates all referenced materials during the player build and generates the streaming data. Materials that are in AssetBundles are not referenced and will not be detected, so your build will not contain the streaming texture data. We cannot store VT streaming data in AssetBundles yet. Adding this support for AssetBundles is our top priority; our goal is to add it by the end of next year. It's a major development task though, so no guarantees yet.
Linux is indeed supported in 2020.2.
A question that has come up: yes, you can now use Streaming Virtual Texturing to stream a heightmap for displacement mapping in Unity 2020.2 with HDRP 10.
In the following Shader Graph, there is 1 VT property that stores the heightmap in the 4th layer. You need to add a second VT sample that has "Automatic Streaming" disabled and Lod Mode set to "Lod Level". This allows you to sample the VT property in the vertex shader to offset the vertex position.
Hi, is there any support planned for URP and mobile? There was android support in 2018.
Is there already a solution for objects behind transparent objects (windows etc.)? Something like a layer mask to ignore transparent objects maybe? Currently the system isn't able to handle the fetching of textures behind transparent objects, which really is a big problem for production use.
There is no solution yet for transparent objects. It's on our short term roadmap though.
We'll have presentation on Streaming Virtual Texturing during the virtual Unite conference next week. You can find more info here.
Thanks for the quick answer Aljosha! Does that mean in the 2020 cycle? We would ideally like to stay on 2020 LTS.
And while we are at it: A simple (revised) example script that demonstrates the fetching api (for example fetching the lowest mip of everything at start) would be great too. The current example breaks when a mesh renderer has more than one material assigned.
Even better: An option in the sample VT node (something like always fetch lowest mip). Usually it's not a problem having blurry textures during fast movements as long as there is something that resembles the actual texture instead of just a color. I think for most people on modern platforms the cost of having a 128x128 version for all textures being in memory would be negligible (especially compared to what "not streaming" would cost).
VT support for transparency won't land before the 2021 cycle.
Here you have an experimental script that makes sure the lowest mip is in memory for every VT texture. This is just an example. It should be customized for your specific use.
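For reference, here is a minimal sketch in the spirit of that script (not the script itself), assuming a VT stack property named "_MyVTStack" (a placeholder; use your own Virtual Texture property name). It walks every renderer's sharedMaterials (plural), which sidesteps the multi-material issue mentioned above.

```csharp
// Hedged sketch: request the lowest mip of a VT stack for every material in
// the scene at startup, so fast camera movement shows a blurry texture
// instead of a flat color. "_MyVTStack" is an assumed property name.
using UnityEngine;
using UnityEngine.Rendering.VirtualTexturing;

public class RequestLowestMips : MonoBehaviour
{
    static readonly int StackId = Shader.PropertyToID("_MyVTStack");

    void Start()
    {
        foreach (var renderer in FindObjectsOfType<Renderer>())
        {
            foreach (var mat in renderer.sharedMaterials)
            {
                if (mat == null || !mat.HasProperty(StackId))
                    continue;
                // Request the whole UV region at a high mip index; this is
                // assumed to be clamped to the stack's smallest mip (verify
                // against the scripting reference for your Unity version).
                Streaming.RequestRegion(mat, StackId, new Rect(0, 0, 1, 1), 12, 1);
            }
        }
    }
}
```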
Great, thank you!
Regarding transparency: Do you mean supporting transparent objects, or fetching the textures of objects behind transparent objects? The latter is the one that's very problematic for us.
I was just testing the VTManualRequesting script and I encountered a very strange behaviour: I get a null reference exception when I have reflection probes in the scene (the problem is solved by disabling the reflection probes). What makes it strange is that this doesn't occur if I create a new reflection probe, only with ones that have been in the scene for, I would guess, more than a year or so. Do you know if something changed with the reflection probes? I remember that in one of the earlier HDRP versions, there was an actual mesh renderer as part of the reflection probe to display the chrome gizmo. I might have traces of these old (hidden) mesh renderers still in my reflection probes, but I have no idea how to access them.
OK, solved it. For anyone who encounters the same problem: you can access the old mesh renderer by using the inspector's debug mode. You can then simply remove the mesh renderer component.
Still confused by the profiler: Texture shows 1.26 GB, VirtualTexture shows 384 MB. Which is correct?
Both are correct.
Did you set "Virtual Texturing Only" = true in the texture importer on each texture that you assign to a VT property (and stream with VT)? If not, then all textures will be entirely loaded in memory, and on top of this the VT GPU caches will be created (384 MB).
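For projects with many streamed textures, flipping that checkbox by hand is tedious. A hedged editor sketch that sets the flag in bulk, assuming `TextureImporter.vtOnly` is the scripted equivalent of the importer checkbox (verify against the docs for your Unity version), and using a placeholder folder path:

```csharp
// Hedged editor sketch: mark every texture under an assumed folder as
// "Virtual Texturing Only" so the full texture is no longer kept in memory
// next to the VT GPU caches. "Assets/VTTextures" is a placeholder path.
using UnityEditor;

public static class MarkVtOnly
{
    [MenuItem("Tools/Mark Textures As VT Only")]
    static void Run()
    {
        foreach (var guid in AssetDatabase.FindAssets("t:Texture2D", new[] { "Assets/VTTextures" }))
        {
            var path = AssetDatabase.GUIDToAssetPath(guid);
            if (AssetImporter.GetAtPath(path) is TextureImporter importer && !importer.vtOnly)
            {
                importer.vtOnly = true;       // the importer checkbox, scripted
                importer.SaveAndReimport();
            }
        }
    }
}
```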
What is the difference between Virtual Texturing and Sampler Feedback with DirectX 12 Ultimate? This is assuming there is a difference. If the technologies are trying to achieve the same goal, is one better than the other and is Virtual Texturing going to be implementing the tech described with Sampler Feedback or are they different sides to the same coin?
Tiled Resources tiers 1-2 are DX12 features, while Sampler Feedback and Tiled Resources tier 3 are DX12U features. DX11.2 also had tiers 1-2. This Virtual Texturing works on DX11.
Those are hardware features while this is software. The DX features are also building blocks while this is a complete system. Unity has for a long time had a DX11.2 Tiled resources implementation called Sparse Textures. These are limited to 16k.
Hopefully, this Virtual Texturing will eventually implement some of the DX12 features, since DX12 adoption is becoming mainstream. Widespread DX12U feature adoption is still far in the future.
Our Unite presentation on Streaming Virtual Texturing is now online. The 25 minute presentation explains the Virtual Texturing basics, compares it with mipmap streaming, and shows how to convert a material to using SVT in shader graph. I'd like to create another half hour video with a complete editor walkthrough on how to convert a project to SVT. Let me know what you'd like to see in that walkthrough.
Nice presentation; especially the profiling part was really helpful for determining the GPU cache size we need. One question though: what's the best way to determine the CPU cache size? What's the effect of the CPU cache in general?
Another question related to the manual requesting. The script you posted earlier here works great, and looking at the profiler, the Request Region calls seem to be really fast, although "VirtualTexturingEditorManager.FindTextureStackHandle" costs up to 9 ms on our Dev machine... Is there any way to optimize this? Or is this going to be optimized as VT matures? Unfortunately we can't afford this cost in the actual game as is. Thanks
Great video! We were using Granite in the past with the built-in RP; however, we have switched to URP and we would like to know the ETA for the URP support mentioned in the video. I mean whether this is something near completion (2020.2/2021.1) or something far from experimental (2021.2+).
Also could you elaborate more on support of Linux? It is not listed under supported platforms even though Vulkan API is already supported.
On the CPU cache, if your hard drive is slow then using a larger CPU cache makes sense. Larger is obviously better if you have the memory to spend. Increasing the size reduces the number of reads from disk and reduces streaming artifacts due to the latency of the disk read (the data is still in cache). But there is a point of limited return that depends on your project. You will probably see little benefit above 256MB. It's something you need to experiment with. The CPU cache contains 1MB pages that contain multiple texture tiles.
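The advice above can be applied at runtime. A hedged sketch, assuming `Streaming.SetCPUCacheSize` (megabytes) is available in your Unity version; the 256 MB figure is the illustrative point of diminishing returns from the post, not a recommendation:

```csharp
// Hedged sketch: enlarge the CPU cache on machines where disk latency causes
// visible streaming artifacts. The size is a per-project tuning value.
using UnityEngine;
using UnityEngine.Rendering.VirtualTexturing;

public class VtCpuCacheTuning : MonoBehaviour
{
    [SerializeField] int cpuCacheMB = 256;  // experiment per project

    void Start()
    {
        // Larger caches mean fewer disk reads and fewer latency artifacts,
        // with limited returns above a project-dependent threshold.
        Streaming.SetCPUCacheSize(cpuCacheMB);
    }
}
```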
On the cost of FindTextureStackHandle, indeed, this is very high right now. It's definitely something we will improve, although it's not planned yet.
On URP, this will not land before 2021.2 and potentially it will land later. However, everything is available for you to add VT support yourself to URP. We first want to make SVT feature complete before rolling it out to URP. We see Assetbundle support as a major missing feature. Can you do without?
Linux is indeed supported; the docs need to be updated.
@AljoshaD I did notice that Procedural Virtual Texturing API itself is already on Unity 2020.2 core: https://docs.unity3d.com/2020.2/Doc...ce/Rendering.VirtualTexturing.Procedural.html
I'm assuming we can't really do anything with this yet though?
Indeed, Procedural VT is still in development and I expect that the API will change significantly.
Could you please elaborate more on steps required for URP implementation? It would be great if you could provide us some basic documentation for custom SRP (in this case URP) implementation. I mean something similar to the "Converting shaders" section in the original documentation (add three Granite specific properties, include GraniteUnity.cginc, use #pragma multi_compile __ GRANITE_RW_RESOLVE for single pass resolver etc.).
One additional question, what is the resolver implementation in the latest version? Presentation only mentions automatic detection based on main camera, are you using the single pass resolver with additional output buffer or is this something new/superior?
We use the single pass resolver with additional output buffer
You can take a look at HDRP to see how SVT is supported. Unfortunately I don't have more info to share.
Migrating cache settings, initial setup etc. are quite straightforward. URP + Shader Graph + SVT seems to be working too (I haven't made a deep analysis of the shader yet). However, the texture is rendered at the lowest mip. Detection of the active tiles looks hardcoded to the HDRenderPipeline, along with some GBuffer injections etc. Is the SVT implementation deferred-only? If so, how are you planning to implement SVT for forward URP? At the moment it seems that you can't add SVT support without the GBuffer, or am I missing something?
SVT supports both Forward and Deferred in HDRP. It binds an extra render target in both cases.
I just want to throw in my two cents and say I am eagerly awaiting SVT in Universal RP.
I've tried it out in HDRP and I am very impressed. I tacked on the full eight 20k textures of NASA's Blue Marble on a single sphere and while it did take 7 min to load, it did eventually and with minimal artifacts.
Higher-speed globe rotations did result in the Google Maps-like block-by-block loading along the edges of the camera, but that's just me playing around with an 80k texture on a single object.
While HDRP did result in some very pretty globes, its additional post processing and everything else is very annoying and quite excessive for the miniproject I'm intending on making.
So I'll just slam my 16k texture onto a single sphere and hike my minimum requirements up to a 2GB GPU. I'll be content on waiting the 2 or 3 years required for SVT to be ported to URP. It's a great feature but not necessary.
The globe looks amazing. Runs at a solid 4.5 ms / frame where base requires 3.7 ms. Absolutely magical. Bravo in making it work.
Anyway, I couldn't get this feature out of my head, so I threw together a trial project experimenting with the limits of virtual texturing, and I've noticed a few things.
First is the most obvious: the lack of support for transparency is a major roadblock. I've managed to make some overlays work by merging some of the layers together and playing with material UVs, but the fact that a lit material cannot be overlaid with an unlit one is noticeable.
Second is the enforced bilinear/trilinear filtering on materials, even when the textures have been designated as point-filtered. I encode rendering data inside the color values of a texture for use in a shader, and any sort of filtering or post processing destroys that information. I've got the antialiasing down to a two-pixel-wide boundary between colors, but I would prefer an option to scale by nearest neighbor instead of average color.
I know the enforced filtering is due to the VT itself and not anything else, because I've overwritten the Unity-generated mip map textures with my own nearest-neighbor downsampled versions. On a plain unlit shader using SampleTexture2D (not Virtual Texture), the color replacement works perfectly. Using the same texture, or setting it as VT Only, then replacing the mips again, there's evidence of bilinear filtering, even though nothing else has changed.
Otherwise, it's amazing. My requirements are very niche and as a whole, VT is an amazing feature.
Edit: Virtual Texturing crashes the entire Unity editor on textures with a limited mip count (a texture with a defined mip map count whose mips are then inserted into the asset using Graphics.CopyTexture). Well, there goes that plan.
1. How to adjust global mip bias offset? Reducing texture quality is crucial for lower quality settings (equivalent of the Quality settings in the legacy version)
2. How to visualize current content of the cache? Debug tiles feature is great, but for deeper debugging it is not sufficient.
1. Currently you cannot set the mip bias yourself. If you set the cache size lower, the bias will be automatically set (higher) to prevent cache thrashing. In the future we want to provide more control over the mip bias.
2. there is no easy way to visualize the content of the cache. What problem are you actually trying to solve? Why do you need to inspect the cache?
1. Please link this to the texture quality parameter in the Quality Settings, because it is quite annoying to balance multiple texture systems (e.g. it looks awful to have reduced texture quality on transparent objects, lightmaps, vegetation... while having opaque objects in high res, consistency is really important here).
2. With the cache overview it was much easier to adjust cache size, mip biases, prefetching etc., because you could easily detect what is being streamed, how often, the utilization of the cache, how long it takes to populate the cache after a load/teleport etc. This is much harder to do with only the tile debugger and RenderDoc or something like that.
3. Why does the resolver require a CommandBuffer in the Process function? I am in the process of converting SVT from HDRP to URP. Shaders, cache management and VTFeedback rendering (additional forward buffer) with downsampling are working as expected, but the Process function in the resolver (followed by VT.System.Update) has zero effect (the cache remains empty, though manual requesting works correctly). I've probably got a bug in the command buffer execution when downsampling the color buffer, but I still don't understand what the additional usage of the buffer in the resolver is. The resolver only needs the latest low-res VTFeedback texture for tile streaming evaluation, right? Or is there some additional GPU buffer processing which is done internally?
*Edit* OK, I think I understand now. The Process function wraps an async readback done by ProcessVTFeedback. However, the async readback is never performed, even though the execution flags are valid and the RTI of the downsampled RT is provided. Since it is an injected method I can't progress without knowing why it fails :/ Internal_ProcessVTFeedback_Injected throws zero errors/warnings.
1. yes, that's a great point. We are looking into this so that you have texture quality controls that impact non-streaming, mip-streaming and vt at the same time.
2. did you try the VT profiler chart? It tells you about cache utilization, etc https://docs.unity3d.com/2020.2/Documentation/Manual/profiler-virtual-texturing-module.html
3. I'll forward your question
About the resolving issue, did you call UpdateSize with the dimensions of the RTI? You could call this every frame, as it only does something if the dimensions have changed; otherwise it will just early-out. This will set up the internal state (including the async readback).
More in-depth help on fixing resolver-related issues.
There are two reasons why I would expect the resolver not to stream in new tiles:
1. Something is wrong with the setup of the resolver, i.e. the residency analysis is never triggered. There are a number of reasons this could happen, like the passed-in RTI being invalid (or having zero dimensions), or you did not call UpdateSize on the C# resolver object (so the internal state is not properly set up). There are some other, smaller cases where it could "do nothing", but those should give you errors/asserts (as something about the VT system itself is in a really bad state).
I would advise looking for the VirtualTexturingManager.ProcessFeedback marker in the profiler (on the render thread). If you see this, it is safe to assume the analysis is triggered (and so downsampling/readback/... worked). See reason 2, as triggered does not necessarily mean anything will be streamed in.
You could try to pass in the full-res RTI (rather than the downsampled version) to validate whether the downsampling is indeed the issue. While it will be slow, it can help to confirm that the issue is inside the downsample.
2. Everything in step 1 worked, but the content of the passed RTI is invalid. In that case, your shaders are not writing the correct values. You should use the Frame Debugger (or RenderDoc if more info is needed) to validate what is happening. Typically this means one of the input parameters is either not bound or incorrectly bound. If regular VT sampling (with some manual requests) works, then this is most likely not the issue.
Also a note of caution if you are using URP. Since feedback rendering is a separate pass, you might be tempted to do it at lower res (and not do a downsampling pass). While this could be a good idea, it will not work without some additional setup (nothing too scary, but otherwise mip calculations will be wrong due to mismatched derivatives). This is already more in the realm of content/project dependent optimisations, so I would advise first getting everything up and running; if you are interested, we can write down a small guide on how to do it and the pros/cons.
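The setup described in the two points above can be sketched roughly as follows. This is a hedged outline for a custom SRP, not HDRP's actual implementation: `feedbackRT` and the class name are placeholders, the render pass that fills the feedback buffer is assumed to exist elsewhere, and the `Resolver` API usage should be checked against the scripting reference for your Unity version.

```csharp
// Hedged sketch: wire a VT feedback RenderTexture into the Resolver.
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.VirtualTexturing;

public class VtFeedbackResolver
{
    Resolver resolver = new Resolver();
    RenderTexture feedbackRT;   // filled by an assumed feedback render pass

    public void Allocate(int width, int height)
    {
        feedbackRT = new RenderTexture(width, height, 0, RenderTextureFormat.ARGB32);
        feedbackRT.Create();    // see below: without an explicit Create(),
                                // the async readback can silently do nothing
    }

    public void Resolve(CommandBuffer cmd)
    {
        // Safe to call every frame; early-outs when dimensions are unchanged.
        resolver.UpdateSize(feedbackRT.width, feedbackRT.height);

        // Triggers the residency analysis (async GPU readback) on the
        // downsampled feedback buffer.
        resolver.Process(cmd, feedbackRT);

        // Lets the VT system act on the analysis and stream in tiles.
        UnityEngine.Rendering.VirtualTexturing.System.Update();
    }
}
```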
@dieterdb thanks for the advice!
I've solved the issue. The problem was that the RT passed to the resolver had to be manually created using the RenderTexture.Create function. But discovering this was really challenging. Please add a check to the resolver that the passed texture is actually created, and someone should improve the documentation on this subject, because the whole rt = new RenderTexture vs rt.Create() distinction is shrouded in mystery. It would be more than helpful to have the documentation explain how an allocation using just new RenderTexture(...) can be suitable for shader outputs, read/write operations in buffers etc., and when you have to manually call .Create (so that operations like the async readback in the resolver work).
However, after this was resolved I encountered another problem. I am currently working on support for multiple cameras (scene view, game view...), resolution changes etc. When the resolution changes, I recreate the render textures that are used for VTFeedback, but for some reason, once the resolver has been populated with a proper RT, calling UpdateSize has zero impact and the console is spammed with:
AsyncGPUReadback - Out of bounds arguments - src offset(0,0,0,0) dst dim(240,135,1) src dim(100,56,1)
The previous VTFeedback was discarded, a new VTFeedback was created, and the values passed to the resolver in the UpdateSize function are the same as the width/height of the newly created VTFeedback, but for some reason the CurrentWidth/Height in the resolver remain unchanged. Are there some additional steps needed besides releasing the previous RT, creating a new RT and updating the size of the resolver to make this work?
I'll add some additional checks inside the resolver implementation to not only catch these kinds of "invalid parameter early-outs" but also provide actionable feedback.
I flagged your concerns about the unclear documentation of the RenderTexture behavior with our internal docs team.
You can have multiple (independent) resolver objects so in that use case one per view might be a valid solution.
But that might not fix the issue you are seeing about the "out of bounds error".
This one is actually related to the fact that HDRP uses the RTHandle system (rather than RenderTextures), and that system does not downsize (it just uses a subrect of a full-resolution target). The Resolver object mimics this behavior (in the editor; in-game it will always rescale).
You have resized your RenderTexture (but the Resolver object did not, even though you called UpdateSize), so that might be the reason you see this error. Can you try to use the Process overload taking a subrect by passing in the dimensions of the actual RenderTexture? Alternatively, recreating the resolver might work (though that will come with some performance penalty).
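The suggested workaround might look roughly like this. This is a hedged fragment, assuming an existing `Resolver`, `CommandBuffer` and feedback `RenderTexture` from your own setup; the parameter order of the subrect overload is an assumption, so check it against the Resolver.Process scripting reference before use.

```csharp
// Hedged sketch: pass the RT's real dimensions as the valid subrect so the
// readback matches the texture even when the resolver kept its old size.
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.VirtualTexturing;

static class VtResolverSubrect
{
    public static void ProcessSubrect(Resolver resolver, CommandBuffer cmd,
                                      RenderTexture feedbackRT)
    {
        resolver.Process(cmd, feedbackRT,
                         0, feedbackRT.width,    // x offset, subrect width
                         0, feedbackRT.height,   // y offset, subrect height
                         0, 0);                  // mip, slice
    }
}
```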
The resolver (not)resizing behavior is too restrictive and I will update this so the resolver is more SRP agnostic. This is very good feedback!
With Unity 2021.1 in beta now, do you have any news/updates regarding Addressables support and a way to render virtual textures behind transparent objects (like windows) in this tech cycle?
In my experience, VTs work on opaque objects behind transparent ones perfectly well.
They just don't work if the material the VT is rendering on is transparent. You can have a VT on a wall behind a window, but not on the window itself.
Unfortunately it currently doesn't. It's likely because of the way the textures to be streamed are chosen based on the view frustum: everything behind windows, for example, remains untextured until there is nothing "occluding" the view between the camera and the texture to be streamed (even if the occluder is a transparent object).