ECS floating origin?

Discussion in 'Data Oriented Technology Stack' started by Razmot, Aug 22, 2018.

  1. Razmot

    Razmot

    Joined:
    Apr 27, 2013
    Posts:
    302
    I suspect there would be some very efficient ways to implement a floating origin system with ECS (keeping track of absolute positions as a double3, and keeping the character/camera/rendering around position 0).

    But I wonder whether this could be done constantly (per frame), or whether I'd need to reset positions only when the camera/player reaches a distance of 2000.

    Any thoughts?
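
    [Editor's note: a minimal sketch of that idea, not from the thread. The component and system names are hypothetical, and the exact Entities API varies by package version.]

    Code (CSharp):
    using Unity.Entities;
    using Unity.Mathematics;
    using Unity.Transforms;

    // Hypothetical component: absolute position kept in 64-bit space.
    public struct AbsolutePosition : IComponentData
    {
        public double3 Value;
    }

    // Every frame, derive the 32-bit render position relative to the
    // camera's absolute position, so rendered values stay near zero.
    public class FloatingOriginSystem : ComponentSystem
    {
        public static double3 CameraAbsolute; // written by the camera system

        protected override void OnUpdate()
        {
            Entities.ForEach((ref AbsolutePosition abs, ref Translation pos) =>
            {
                // Subtract in doubles first, then narrow to float.
                pos.Value = (float3)(abs.Value - CameraAbsolute);
            });
        }
    }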
     
  2. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    7,426
    If you don't need it per frame, then don't do it per frame.
    You would probably think about shifting closer to 10k.
    In my opinion, 2k is far too early and unnecessary.
     
  3. Razmot

    Razmot

    Joined:
    Apr 27, 2013
    Posts:
    302
    I did my own floating origin in XNA a long time ago, and it was actually just custom view/projection matrices and shaders - there was no moving of transforms, no extra calculations, just another way to render stuff.

    The concept of a camera does not exist on the GPU; at a low level it doesn't matter whether the world moves or the player moves. A pure ECS renderer could work that way.
     
  4. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    7,426
    That sounds similar to what I am trying to achieve.
    But it seems you already have your own answer?

    I can advise searching the ECS forum for the "double" data type keyword.
     
  5. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,726
    HDRP uses camera-relative rendering, so the camera remains at 0,0,0 at all times. This allows you to utilise ECS to shift everything periodically. I would expect this not to impact your framerate at all.

    You would not necessarily need doubles with this approach. As mentioned above, 10k is when things start breaking down numerically, assuming real-world sizes and 1 unit = 1 metre.

    Obviously, if your scales differ, then you would adjust that estimate.
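
    [Editor's note: a periodic shift along those lines might be sketched as below. This is an untested illustration, not from the thread; Translation vs. Position naming depends on the Entities version.]

    Code (CSharp):
    using Unity.Entities;
    using Unity.Mathematics;
    using Unity.Transforms;
    using UnityEngine;

    // Sketch: when the camera drifts past a threshold, subtract its offset
    // from every entity so the world snaps back around the origin.
    public class OriginShiftSystem : ComponentSystem
    {
        const float Threshold = 10000f; // ~10k units, per the discussion above

        protected override void OnUpdate()
        {
            float3 offset = Camera.main.transform.position;
            if (math.length(offset) < Threshold)
                return;

            // Shift every entity back by the camera's offset...
            Entities.ForEach((ref Translation t) => { t.Value -= offset; });

            // ...and move the camera itself back to the origin.
            Camera.main.transform.position = Vector3.zero;
        }
    }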
     
    Razmot and Antypodish like this.
  6. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    7,426
    I need to make myself more familiar with this approach.
     
  7. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    7,426
    Just putting these here for now as references (if you don't mind); nothing conclusive for this use case, at least not for me. But I will search more on camera-relative rendering.

    The High Definition Render Pipeline: Focused on visual quality
    https://blogs.unity3d.com/2018/03/16/the-high-definition-render-pipeline-focused-on-visual-quality/


    Edit1

    Relative to camera rendering

    https://forum.unity.com/threads/relative-to-camera-rendering.520875/

    Also there is link there to
    Double support for Burst compiler and new math library
    https://forum.unity.com/threads/double-support-for-burst-compiler-and-new-math-library.520394/

    with a Unite Europe 2017 talk.

    Edit2

    From
    Feedback Wanted: Scriptable Render Pipelines
    https://forum.unity.com/threads/fee...e-render-pipelines.470095/page-7#post-3384859

     
    Last edited: Aug 22, 2018
    Peter77 and hippocoder like this.
  8. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    7,426
    A bit more of what I found on camera-relative rendering:

    Unity-Technologies/ScriptableRenderPipeline
    https://github.com/Unity-Technologies/ScriptableRenderPipeline/wiki/Camera-Relative-Rendering

    Quotation from document

    Camera Relative Rendering

    Camera-Relative Rendering in the HDRP

    What is it?

    The purpose of camera-relative rendering is to make rendering of distant objects (with large world space coordinates) more robust and numerically stable.

    How does it work?

    It accomplishes the task by translating objects and lights by the negated world space camera position (into the so-called camera-relative world space) prior to performing any other geometric transformations. The world space camera position is then subsequently set to 0, and all relevant matrices are modified accordingly.

    Therefore, in the shader, expect view and view-projection matrices (and their inverses) to be camera-relative, along with light (e.g. LightData.positionWS) and surface (e.g. PositionInputs.positionWS) positions. Expect most world space positions you encounter in HDRP shaders to be camera-relative. Note that _WorldSpaceCameraPos is never camera-relative, as it’s used for coordinate space conversion.

    How can I enable it?

    It is enabled by default in ShaderConfig.cs. If you change the value in the file, make sure to Generate Shader Includes in order to update ShaderConfig.cs.hlsl.

    How do I switch between coordinate spaces?

    Use GetAbsolutePositionWS() and GetCameraRelativePositionWS() defined in ShaderVariablesFunctions.hlsl.

    Examples

    If camera-relative rendering is enabled:

    GetAbsolutePositionWS(PositionInputs.positionWS) returns the non-camera-relative world space position.

    GetAbsolutePositionWS(float3(0, 0, 0)) returns the world space position of the camera equal to _WorldSpaceCameraPos.

    GetCameraRelativePositionWS(_WorldSpaceCameraPos) returns float3(0, 0, 0).

    If camera-relative rendering is disabled:

    GetAbsolutePositionWS() and GetCameraRelativePositionWS() return the position passed to the function without any modification.

    However, according to
    The High Definition Render Pipeline: Focused on visual quality
    https://blogs.unity3d.com/2018/03/16/the-high-definition-render-pipeline-focused-on-visual-quality/

    So, by my understanding, camera-relative rendering may not be applicable to older hardware, even without the need for astonishing shader effects?
     
    Razmot likes this.
  9. Razmot

    Razmot

    Joined:
    Apr 27, 2013
    Posts:
    302
    Thanks guys, that's super interesting and useful !
    Note that double2 / double3 are already in the current unity.mathematics.
     
    Antypodish likes this.
  10. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    557
    Relative rendering isn't all you want with floating-origin style setups. You also want the precision for physics etc., which you don't get from relative rendering. Typically it first becomes obvious with things you move on top of (vehicles etc.) - stuttering/sliding in the movement.
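
    [Editor's note: to put rough numbers on that stuttering - a 32-bit float has a 24-bit mantissa, so the smallest representable step grows with distance from the origin. A standalone C# illustration, not from the thread (BitConverter.SingleToInt32Bits needs a recent .NET):]

    Code (CSharp):
    using System;

    class FloatPrecision
    {
        static void Main()
        {
            // Smallest representable increment at a given magnitude:
            // the next float above 'pos', minus 'pos' itself.
            foreach (float pos in new[] { 100f, 1000f, 10000f, 100000f })
            {
                int bits = BitConverter.SingleToInt32Bits(pos);
                float step = BitConverter.Int32BitsToSingle(bits + 1) - pos;
                Console.WriteLine($"at {pos,8} m: step ~ {step} m");
            }
            // Around 10 km the step is already ~1 mm, which is where
            // physics contacts and animation start to visibly stutter.
        }
    }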
     
  11. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,726
    Currently ECS hasn't got native physics support, but it's apparently on the cards for an indeterminate date, so that is a pretty big limitation of using ECS right now - many systems are still yet to come.
     
  12. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    7,426
    This was something I was concerned about. Therefore, I attempted to investigate a bit more what camera-relative rendering is about. But indeed, I couldn't find anything concrete which could help with the floating-origin case.

    For that, I don't mind waiting for ECS physics support. So for now, it leads me to think that the typical tricks and solutions from classic OOP can be applied to achieve floating-origin results.
     
  13. ristophonics

    ristophonics

    Joined:
    May 23, 2014
    Posts:
    30
    I have found things break down a bit even under the 1000-meter mark. This is of course when moving at orbital speeds. I update the entire world back to (0,0,0) when the player gets 1 km from the origin.

    Since the original wiki gives a great quick fix, I want to share my kind of sloppy but working code here for anyone who wants to use it (works in 5.6).

    I am using this code in Orbital Dogfight VR. If you have a Rift, please give it a whirl. It's free on itch.io here:
    https://mogacreative.itch.io/orbital-dogfight

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;
    /// <summary>
    /// Morgan Garrard Mod Origin Transform
    /// </summary>
    public class OriginTransform : MonoBehaviour {

        // Registry of every transform that should be shifted with the origin.
        public static List<OriginTransform> OriginTransforms;
        public Transform _ParentTransform;
        public bool hasParticles = false;
        public bool hasLineRender = false;
        private bool onList = false;

        public ParticleSystem[] particles;
        public LineRenderer[] lineRenderers;

        private void Awake()
        {
            _ParentTransform = transform;
            if (hasParticles)
            {
                particles = gameObject.GetComponentsInChildren<ParticleSystem>();
            }

            if (hasLineRender)
            {
                lineRenderers = gameObject.GetComponentsInChildren<LineRenderer>();
            }
        }

        private void OnEnable()
        {
            if (OriginTransforms == null)
            {
                OriginTransforms = new List<OriginTransform>();
            }
            if (!onList)
            {
                OriginTransforms.Add(this);
                onList = true;
            }
        }

        private void OnDisable()
        {
            // Unregister so the controller never touches a destroyed transform.
            OriginTransforms.Remove(this);
            onList = false;
        }
    }
    +
    Code (CSharp):
    using UnityEngine;
    /// <summary>
    /// Morgan Garrard Mod Origin Controller
    /// </summary>
    public class OriginController : MonoBehaviour {

        public GameObject controllerPos;
        public float threshold = 950.0f;

        public int ListCount;

        ParticleSystem.Particle[] parts = null;

        public Transform[] activeTransforms;

        void Start () {
            controllerPos = Camera.main.gameObject;
        }

        private void GetActiveTransforms()
        {
            int count = OriginTransform.OriginTransforms.Count;
            activeTransforms = new Transform[count];
            ListCount = count;
            for (int i = 0; i < count; i++)
            {
                activeTransforms[i] = OriginTransform.OriginTransforms[i].gameObject.transform;
            }
        }

        private void MoveToOrigin()
        {
            // Cache the offset once: if the camera itself is on the list, its
            // position changes mid-loop and later entries would shift by zero.
            Vector3 offset = controllerPos.transform.position;

            int count = OriginTransform.OriginTransforms.Count;
            for (int i = 0; i < count; i++)
            {
                Transform t = OriginTransform.OriginTransforms[i]._ParentTransform;

                if (!t.gameObject.activeInHierarchy)
                {
                    continue;
                }

                t.position -= offset;

                if (OriginTransform.OriginTransforms[i].hasLineRender)
                {
                    foreach (LineRenderer item in OriginTransform.OriginTransforms[i].lineRenderers)
                    {
                        Vector3[] points = new Vector3[item.positionCount];
                        // Read the existing points before shifting them;
                        // without this the line collapses to -offset.
                        item.GetPositions(points);
                        for (int l = 0; l < points.Length; l++)
                        {
                            points[l] -= offset;
                        }
                        item.SetPositions(points);
                    }
                }

                if (OriginTransform.OriginTransforms[i].hasParticles)
                {
                    foreach (ParticleSystem item in OriginTransform.OriginTransforms[i].particles)
                    {
                        MoveParticles(item, offset);
                    }
                }
            }
        }

        private void MoveParticles(ParticleSystem sys, Vector3 offset)
        {
            // Only world-space particles need shifting; local-space ones
            // follow their transform automatically.
            if (sys.main.simulationSpace != ParticleSystemSimulationSpace.World)
                return;

            int particlesNeeded = sys.main.maxParticles;

            if (particlesNeeded <= 0)
                return;

            bool wasPaused = sys.isPaused;
            bool wasPlaying = sys.isPlaying;

            if (!wasPaused)
                sys.Pause();

            // ensure a sufficiently large array in which to store the particles
            if (parts == null || parts.Length < particlesNeeded)
            {
                parts = new ParticleSystem.Particle[particlesNeeded];
            }

            // now get the particles
            int num = sys.GetParticles(parts);

            for (int p = 0; p < num; p++)
            {
                parts[p].position -= offset;
            }

            sys.SetParticles(parts, num);

            if (wasPlaying)
                sys.Play();
        }

        private void LateUpdate () {
            // Shift the whole world once the camera drifts past the threshold.
            if (controllerPos.transform.position.magnitude > threshold)
            {
                MoveToOrigin();
            }
        }
    }
     
  14. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    557
    IIRC, Kerbal Space Program also deals with this by having "floating velocity", or whatever you'd call it: you shift velocities too, so the reference craft sits at (0, 0, 0) velocity. It also greatly reduces how often you have to shift the origin. (It's probably also the source of the "kraken" issues they had early on, where bigger/more complicated craft had a chance to randomly explode.)
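
    [Editor's note: a sketch of that velocity rebase in classic Unity terms, not from the thread; the field names and threshold are illustrative, not KSP's.]

    Code (CSharp):
    using UnityEngine;

    // KSP-style "Krakensbane" sketch: when the reference craft gets fast,
    // subtract its velocity from every rigidbody and accumulate the total
    // as the velocity of the reference frame itself.
    public class VelocityRebase : MonoBehaviour
    {
        public Rigidbody reference;     // e.g. the player's craft
        public float threshold = 750f;  // rebase above this speed (m/s)
        public Vector3 frameVelocity;   // accumulated frame velocity

        void FixedUpdate()
        {
            Vector3 v = reference.velocity;
            if (v.magnitude < threshold)
                return;

            foreach (Rigidbody rb in FindObjectsOfType<Rigidbody>())
                rb.velocity -= v;       // everything slows down by v...
            frameVelocity += v;         // ...and the frame absorbs it
        }
    }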
     
  15. ristophonics

    ristophonics

    Joined:
    May 23, 2014
    Posts:
    30
    Yeah, I love Kerbal. Orbital Dogfight borrows heavily from KSP, but I have not yet mimicked their floating velocity.

    The two scripts above are intended to replace http://wiki.unity3d.com/index.php/Floating_Origin with more functionality (including LineRenderers) and with no GameObject.Find(), which causes a decent amount of overhead if you need to update positions quickly.
     
  16. btristan

    btristan

    Joined:
    Oct 15, 2018
    Posts:
    82
    I have a floating origin system that I like a lot. I have a system that takes user input and records the camera position in my 64-bit space. The camera is actually 4-5 different cameras: the first renders from 1 to 1,000 meters at regular size, the next from 1,000 to 1,000,000 meters at 0.001 scale, and so on.

    Then I have a system that tracks which objects should be rendered by each camera.

    Then I have a system that generates the matrices based on the camera position and object position.

    Code (CSharp):
    public sealed class RenderSystem : ComponentSystem {
        private ComponentGroup _componentGroup;

        protected override void OnCreateManager () {
            _componentGroup = GetComponentGroup(ComponentType.ReadOnly<Position>(),
                                                ComponentType.ReadOnly<Rotation>(),
                                                ComponentType.ReadOnly<VisibleLayerIndicies>(),
                                                ComponentType.ReadOnly<LayeredLods>());
        }

        protected override void OnUpdate () {
            DebugUi.RenderedObjects = 0;

            _componentGroup.ResetFilter();
            NativeArray<ArchetypeChunk> chunks = _componentGroup.CreateArchetypeChunkArray(Allocator.TempJob);

            foreach (ArchetypeChunk chunk in chunks) {
                LayeredLods layeredLods = chunk.GetSharedComponentData(GetArchetypeChunkSharedComponentType<LayeredLods>(),
                                                                      World.Active.EntityManager);

                _componentGroup.SetFilter(layeredLods);

                Profiler.BeginSample("Get Chunk Components");
                NativeArray<Position> chunkPositions =
                    chunk.GetNativeArray(GetArchetypeChunkComponentType<Position>(true));
                NativeArray<Rotation> chunkRotations =
                    chunk.GetNativeArray(GetArchetypeChunkComponentType<Rotation>(true));
                NativeArray<VisibleLayerIndicies> chunkVisibleLayerIndicies =
                    chunk.GetNativeArray(GetArchetypeChunkComponentType<VisibleLayerIndicies>(true));
                Profiler.EndSample();

                int layerIterations = math.min(layeredLods.Layers.Length, CameraRig.FarClipDistanceLayers.Length);

                int chunkSize = chunk.Count;

                for (int i = 0; i < chunkSize; i++) {
                    if (chunkVisibleLayerIndicies[i].Value == Ints.NoBitsSet) {
                        continue;
                    }

                    for (int layerIndex = layerIterations; layerIndex >= 0; layerIndex--) {
                        if (chunkVisibleLayerIndicies[i].Value.IsBitSet(layerIndex)) {
                            Profiler.BeginSample("Render Calculations");
                            int layer = CameraRig.FarClipDistanceLayers[layerIndex].Layer;

                            float layerScale = math.pow(CameraRig.WorldCameraFactor, -layerIndex);

                            double3 relativePosition = (chunkPositions[i].Value - CameraRig.WorldCameraPosition) * layerScale;
                            float3 layerScaleVector = new float3(layerScale);

                            float4x4 matrix = float4x4.TRS(relativePosition.toFloat3(),
                                                           chunkRotations[i].Value.toNormalizedQuaternion(),
                                                           layerScaleVector);
                            Profiler.EndSample();

                            Profiler.BeginSample("Render Call");
                            // TODO add GPU batching? (or is Unity batching these for me already? - see the frame debugger)
                            Graphics.DrawMesh(layeredLods.Layers[layerIndex].Mesh,
                                              matrix,
                                              layeredLods.Layers[layerIndex].Material,
                                              layer);
                            Profiler.EndSample();

                            DebugUi.RenderedObjects++;
                        }
                    }
                }
            }

            chunks.Dispose();
        }
    }
    The floating origin itself isn't that hard - it is just ObjectPosition - WorldCameraPosition. Rendering everything nicely is more of a challenge.
     
    TakuanDaikon, awesomedata and Razmot like this.
  17. ristophonics

    ristophonics

    Joined:
    May 23, 2014
    Posts:
    30
    Jebus, so beautiful and over my head. If you have the time could you explain a little more about what is happening in the script? So cool!
     
  18. btristan

    btristan

    Joined:
    Oct 15, 2018
    Posts:
    82
    I'll write a blog post some time.
     
    awesomedata and ristophonics like this.
  19. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,136
    It would really be nice to see the project setup for this.

    Does it matter if you're using HDRP, or can you use the old standard shaders with this? (I ask because of that camera-relative rendering thing.)

    This is all still a bit of voodoo for me, but I **think** I get the gist. I definitely would like to see that blog post explaining the prerequisites and limitations of a system like this. For example, I wonder about the scale factor of the distant cameras and why you chose the particular scale parameters you did. I also wonder how you'd handle physics in a world that matched your camera view size -- especially when using ECS. Does this script actually manage to move the *physics* objects/colliders themselves, or does it only change where in the world the objects are rendered (so it can prevent vertex/animation flickering)?

    Mind sharing some insights?
     