Hello, I'm using the Scriptable Render Pipeline on the Quest 2 and doing a depth-blending effect in a shader. It samples the screen depth texture and computes an alpha factor from the object's depth, so the edges where transparent objects intersect opaque geometry fade out softly instead of showing a hard seam. The effect works fine on PSVR and SteamVR, but on Quest 2 there is an "offset" in the depth-blend area while I move my head; when I stop moving, the offset disappears. It looks as if the screen depth texture is "delayed" relative to what is rendered on screen. I would like to know why this happens and how to solve it. Thanks and regards!
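For context, the depth-blend factor described above is typically computed by comparing the opaque scene depth behind the fragment with the fragment's own eye depth. Here is a minimal URP fragment-shader sketch of that idea; the names `_FadeDistance`, `Varyings`, and `positionNDC` are assumptions for illustration, not taken from the original post:

```hlsl
// Minimal soft depth-fade sketch for URP (assumed property: _FadeDistance).
// Requires the camera's "Depth Texture" setting to be enabled.
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareDepthTexture.hlsl"

float _FadeDistance; // eye-space distance over which the edge fades out

half4 frag(Varyings IN) : SV_Target
{
    // Screen-space UV from the clip-space position passed by the vertex shader.
    float2 uv = IN.positionNDC.xy / IN.positionNDC.w;

    // Linearize the opaque scene depth and compare it with this fragment's eye depth.
    float sceneEyeDepth = LinearEyeDepth(SampleSceneDepth(uv), _ZBufferParams);
    float fragEyeDepth  = IN.positionNDC.w;

    // Alpha goes to 0 as the transparent fragment approaches the opaque surface,
    // softening the intersection edge instead of producing a hard seam.
    float fade = saturate((sceneEyeDepth - fragEyeDepth) / _FadeDistance);

    half4 col = half4(1, 1, 1, 1);
    col.a *= fade;
    return col;
}
```

If the depth texture sampled here lags one frame behind the current head pose, `sceneEyeDepth` no longer matches the on-screen geometry, which would produce exactly the moving "offset" described above.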
Figured out a work-around:

- Create an overlay camera that renders only the object with the shader that uses the depth texture (in my case it's a plane that's stuck to the camera and covers the entire view).
- Add the new camera to the main camera's stack, and make sure the main camera isn't rendering the aforementioned object.
- Make sure the "Depth Texture" setting in the camera component is On on the main camera and Off on the new camera.

The depth texture is no longer delayed, since it is now sampled after the first camera has finished rendering.