Currently, "world space" means slightly different things for different Position and Transform nodes when using Shader Graph with the HDRP. For some nodes the output is the actual world space position; for others it is the camera-relative world space position. While this inconsistency is obviously a bug, there is real value in having direct access to that original camera-relative position for certain types of effects.

Anything that uses screen-space derivatives, or even world-space UVs, can benefit from using the camera-relative position instead of the "real" world-space position to reduce precision errors. Derivative-based normals can start getting noisy as little as 50 units from the world origin when close to the object, and after 500 units or so can become nearly unusable even when not inspecting them closely. World-space UVs will start to break down on larger textures a few thousand units from the origin with 32-bit floats, and much sooner with half floats. This is true for the LWRP as well, perhaps even more so, since float precision issues are more apparent on mobile, where half-precision floating point is actually used (and plausibly, in the future, on desktop and console GPUs for all SRPs as native half support becomes more common).

For the HDRP, where the position passed between stages is the camera-relative position, precision is lost when the camera position is added back in: the value gets quantized and the original precision can never be recovered. For the LWRP, where the position passed between stages is the world position, precision is lost on the initial transform, but could be partially reconstructed during interpolation. Instead, the low-precision values are interpolated, adding noise and further reducing the apparent quality.

So my feature request would be to add a "camera relative world position" space to the Position and Transform nodes.
Additionally, I would change the LWRP to pass the camera-relative position, rather than the actual world position, from the vertex stage to the fragment stage. While this does add a single extra instruction to the vertex shader, it has no real additional cost in the fragment shader: many shaders that need the world position will also want the world view direction, so they are already paying for that "extra instruction" to subtract the camera position from the world position. It would simply happen when calculating the world position instead of the view direction.

TL;DR: Add Camera Relative to the list of spaces used by the Position and Transform nodes. Always pass the camera-relative position instead of the world position between the vertex and fragment stages for both SRPs, to combat floating-point precision issues caused by barycentric interpolation.