Could games project their simulation a few milliseconds into the future so players see now?

Discussion in 'General Discussion' started by Arowx, Jun 26, 2020.

Thread Status:
Not open for further replies.
  1. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Your game is running at 100 Hz, so you have 10 ms to:
    • Update your game's simulation on the CPU (7 ms)
    • Pass the rendering information to the GPU (3 ms) [your game's 10 ms update time]
    • Send the pixel information update to the monitor (5 ms)
    • Wait for the monitor's display elements to change (2 ms)
    If you know the complete time it takes for the player to see the update, then instead of showing them the world from 17 ms ago you could time-shift the simulation so they get a view of the world as it is when they see it.

    Could this forward time shifting of your game work or would it add more problems than it solves?
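    To make the arithmetic concrete, here is a minimal sketch (all four numbers are the hypothetical values from the list above, and the 10 m/s object is an invented example, not anything measured):

```csharp
using System;

// All four numbers are the hypothetical values from the post, not measurements.
const double simulationMs = 7.0;   // update the simulation on the CPU
const double gpuSubmitMs  = 3.0;   // pass rendering information to the GPU
const double scanoutMs    = 5.0;   // send pixel information to the monitor
const double panelMs      = 2.0;   // display elements changing

double displayLatencyMs = simulationMs + gpuSubmitMs + scanoutMs + panelMs;
Console.WriteLine(displayLatencyMs);          // 17

// "Time shifting" would mean extrapolating moving objects forward by that
// latency before rendering, e.g. for an object moving at 10 m/s:
double speed = 10.0;                          // metres per second (invented)
double aheadMetres = speed * displayLatencyMs / 1000.0;
Console.WriteLine(aheadMetres);               // 0.17
```

So at these example speeds the rendered position would lead the simulated one by 17 cm.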
     
  2. Joe-Censored

    Joe-Censored

    Joined:
    Mar 26, 2013
    Posts:
    11,847
    I always love your threads, because they get me thinking while also leaving me scratching my head wondering why you'd want to do this in the first place :) :p

    But I would think you'd attack this from the reverse: rather than simulating into the future, cache the state from the past. You could store the values of everything relevant to your game as it was 17 ms ago (2 frames?) and treat that as your "present" (even though it is really the past) for whatever you want that information for. Everything getting processed/rendered this frame, in the actual present, you then consider the future in the context of your idea.
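    A minimal sketch of that snapshot cache (the `State` struct, its fields, and the two-frame depth are all assumptions for illustration, not anything Unity provides):

```csharp
using System.Collections.Generic;

// Hypothetical snapshot of whatever is "relevant to your game".
struct State
{
    public double SimTime;   // simulation time the snapshot was taken
    public float PlayerX;    // example field
}

// Keeps the last N frames so the oldest one can be treated as the "present".
class StateHistory
{
    readonly Queue<State> frames = new Queue<State>();
    readonly int depth;

    public StateHistory(int depth) { this.depth = depth; }

    public void Push(State s)
    {
        frames.Enqueue(s);
        if (frames.Count > depth) frames.Dequeue();   // drop anything older
    }

    // Oldest cached frame, e.g. roughly 17 ms old at 120 Hz with depth 2.
    public State Present => frames.Peek();
}
```

    Each frame you would Push the freshly simulated state, read game logic from Present, and render the newest (future) snapshot.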
     
  3. and it is great up until the point where we add real time interaction...
     
    Joe-Censored likes this.
  4. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,042
    What does this solve, and how is it different from Stadia's proposed predictive input?
     
  5. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Well, it's not about input lag, it's about output or display lag. Gamers often want higher refresh rate displays because the time gap between what the game simulation is doing and what the display is showing is reduced, often halved or even quartered. They therefore don't need to 'aim off' to compensate for display lag.

    60 Hz - 16.66... ms
    120 Hz - 8.33... ms
    144 Hz - 6.94... ms
    240 Hz - 4.16... ms
    360 Hz - 2.77... ms
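    Those frame times are just 1000 divided by the refresh rate:

```csharp
using System;

// Frame time in milliseconds for a given refresh rate in Hz.
static double FrameTimeMs(double hz) => 1000.0 / hz;

Console.WriteLine(FrameTimeMs(144));   // ~6.94
Console.WriteLine(FrameTimeMs(240));   // ~4.17
```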

    Actually, Stadia should have a system like this that compensates for network-to-display lag for each gamer, so that what they see and when they see it is not too out of date (I'm not familiar with Stadia's predictive input system).

    So gamers have to compensate for the fact that what they are seeing is a past snapshot of the game's simulation time, and adjust their actions accordingly.

    If we gave gamers a real time view of the world where they see the simulation rendered to represent the time it is displayed that would mean they would not have to compensate as much.

    And in theory if we combined that with input lag compensation they could respond in real time to the game.

    To fully do it accurately we would probably need some hardware signal back from the monitors to ensure we have the correct timing information to calculate the display time offset. And from input devices to provide accurate lag compensation calculations. Or accurate timing information from display drivers.

    Imagine games where regardless of frame rate it's an accurate representation of now.

    Could this also help with VR sickness in VR games? Maybe output lag compensation could help there too?
     
    Last edited: Jun 27, 2020
  6. DauntlessVerbosity

    DauntlessVerbosity

    Joined:
    Feb 28, 2020
    Posts:
    37
    Kiwasi and angrypenguin like this.
  7. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    The issue is letting people interact with that system. The real concept of lag is between what the player sees on screen, the player's reaction to what was seen, and then what is rendered next based on the player's actions. The player is obviously part of that loop. Predictively rendering some future point in time does not take the player into account.
     
    angrypenguin and JoNax97 like this.
  8. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    A bit more fun but similar results around 200 ms.

    It's not so much the absolute times as the offset from the simulation, which depends on the movement speed of the player and of game objects in the scene. So if you imagine a game running at 30 Hz or 60 Hz, you're looking at a 33 or 16 ms offset plus change. That is not a lot if the game objects are static, but with moving objects and distance to factor in, you can start to grasp how quickly it becomes an issue in games, especially ones with longer range targets.

    And arguably a good player will adapt their play to allow for their own reaction time offset. How else would we be able to play fast-paced real-world sports with high-speed objects, or even catch a ball? So your point could be moot.
     
  9. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    Your idea would definitely make VR motion sickness far worse. Basically your idea completely ignores player input. Imagine rotating your head while wearing a VR headset, but imagine the in-game VR camera just randomly moving without any regard for your actual head tracking. That is what you are actually advocating in this thread. Basically guaranteed puking in VR.
     
    Deleted User and JoNax97 like this.
  10. DauntlessVerbosity

    DauntlessVerbosity

    Joined:
    Feb 28, 2020
    Posts:
    37
    To get an idea of how incredibly small 17 ms is, try to start and stop this timer in that amount of time. I know that elite gamers say it makes a difference, but I'm not convinced it does.

    https://www.estopwatch.net/

    Click on the left side of the start button so that your second click is in place to stop the timer.
     
    Kiwasi likes this.
  11. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    As opposed to rendering some historic point in time that players then have to compensate for by predicting the future?

    Let's say the following dotted line with chevrons represents a target.

    What the players sees:
    ...>...

    Where the game simulation is at the point in time the player sees the scene

    ....>..

    By compensating for output lag the player sees the simulated world as it is now and not how it was.

    Consider Peeker's Advantage, it's an FPS trick where the peeker knows they have a split second window of time before the game updates to show them to the enemy player. Just enough time to peek and shoot.

    Good video on Peeking...


    Of course this is often seen in network play, but how much of Peeker's Advantage could be reduced with an output lag compensation system?
     
    Last edited: Jun 28, 2020
  12. DauntlessVerbosity

    DauntlessVerbosity

    Joined:
    Feb 28, 2020
    Posts:
    37
  13. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    https://blurbusters.com/nvidia-study-reveals-240hz-gives-you-edge-in-battle-royale/

    Nvidia's study seems to indicate a correlation between display refresh rate and player performance.

    Therefore, if there is a jump in performance going from 16 ms to 7 ms, and again at 4 ms, then we could also see the display output offset impacting gaming performance.
     
  14. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
  15. DauntlessVerbosity

    DauntlessVerbosity

    Joined:
    Feb 28, 2020
    Posts:
    37
    Now, I'm not saying that they're lying, but they do have a monetary motive to push the idea that their more expensive hardware matters a lot. I'd like to see the data from a neutral party.

    There is a point at which our technology is good beyond the ability to matter to the human brain. The numbers say that in many cases we've reached that. Now, if your game is slowing down to an actually perceptible FPS issue, that's an issue. But if the difference is far smaller than we can perceive and far smaller than the delay our own neurons insert into the equation, then the differences advertised might only be a well done marketing gimmick.
     
  16. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
     
    Ryiah likes this.
  17. DauntlessVerbosity

    DauntlessVerbosity

    Joined:
    Feb 28, 2020
    Posts:
    37
    The first thing that video says is "sponsored by Nvidia". Not neutral. Don't get me wrong, I love my Nvidia graphics. I've been a fan forever. All of my graphics cards have come from them since the 90's. No hate on them. They're just not neutral.

    I'd like to see a study done by neurologists, perhaps from a top research university. But that's me. Maybe I'm weird.
     
    angrypenguin likes this.
  18. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    Feel free to build a prototype of a game to test your idea. If you do that, then you will see what I mean. Your idea does not take player inputs into account. Your idea basically amounts to trying to predict what the player will do before the player does it, and then rendering that automatically before receiving the player's input. The idea would be fantastic if and only if you are able to reliably predict every player input correctly in advance. If you fail even once to correctly predict what the player was going to do, then the game will instantly feel disconnected from the player inputs.

    Arowx, feel free to completely ignore this fact and just post a bunch of unrelated videos and images.
     
    angrypenguin and Deleted User like this.
  19. JoNax97

    JoNax97

    Joined:
    Feb 4, 2016
    Posts:
    611
    Your premise is flawed. It's people who are competitive who invest in the hardware necessary to reduce frame time.
    So you can argue that performant players buy the pro hardware and not the other way around.
     
    angrypenguin likes this.
  20. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    What about the data they discovered showing that total play time boosted performance, and that time spent on higher refresh rates improved performance faster...



    And you seem to base your counter-premise on how good players are, judged by how competitive they are and their disposable hardware income.
     
  21. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    Hey, that's one of those times where I'm surprised at the direction the discussion goes... I EXPECTED people to say "nothing new, it's DEAD RECKONING EXTRAPOLATION", i.e. a quite old method some action games already use, ESPECIALLY over the internet.

    Having followed and being aware of much of the research in this area, he is right: the players who benefit the most are casuals and noobs; pros do better and can compensate even at low fps, even at a competitive level.

    They did a previous video without a sponsor, but Nvidia saw that as an opportunity, so they redid a bigger test that became that video. The result is mostly the same, which means Nvidia is only cashing in on something already given; it also matches research in similar domains.
     
    Arowx and JoNax97 like this.
  22. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    The difference is that dead reckoning extrapolation is common in netcode, and Arowx seems to actually be talking about local player input prediction. In netcode, dead reckoning extrapolation tries to predict locations of other players based on their last known location and movement vector. The netcode solution works well in many games, especially if there is a decent system in place to smoothly correct any mispredictions.
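    That netcode-style extrapolation can be sketched in a couple of lines (1D here for brevity; the specific numbers are invented for illustration):

```csharp
using System;

// Dead reckoning: extrapolate a remote player's position from the last
// received snapshot, assuming constant velocity since then.
static double DeadReckon(double lastKnownX, double velocityX, double secondsSinceUpdate)
    => lastKnownX + velocityX * secondsSinceUpdate;

// Last packet said x = 5 m, moving at 2 m/s, and the packet is 100 ms old:
double predictedX = DeadReckon(5.0, 2.0, 0.1);
Console.WriteLine(predictedX);   // 5.2
```

    When the next packet arrives, the misprediction error is typically blended away over a few frames rather than snapped.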

    By contrast, local player input prediction would try to predict the player's next inputs and try to react to those inputs before the inputs were received. I doubt there is a solid way to smoothly correct mispredictions in local player input prediction. For example, if I move my mouse left and the local player input prediction thought I was going to move the mouse right, then it would feel pretty jarring to watch the camera turn to the right.
     
  23. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,614
    Yeah, that was my thought, too. As an example, Rigidbody already has two smoothing modes, one of which relies on extrapolating future expected positions of objects. The purpose is different, but the approach is the same.
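    For reference, that mode is selected through the real `Rigidbody.interpolation` property; a minimal component (Unity-only code, so it won't run outside the engine):

```csharp
using UnityEngine;

// Ask the physics engine to project this body's rendered transform forward
// based on its current velocity, instead of interpolating between the two
// previous physics steps.
[RequireComponent(typeof(Rigidbody))]
public class ExtrapolatedBody : MonoBehaviour
{
    void Awake()
    {
        GetComponent<Rigidbody>().interpolation = RigidbodyInterpolation.Extrapolate;
    }
}
```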
     
  24. SparrowGS

    SparrowGS

    Joined:
    Apr 6, 2017
    Posts:
    2,536
    This breaks causality; you can't control the present from the future of the same timeline.
    I think it will just make stuff jump more when trying to reconcile the input with the simulation.

    Exactly what I thought.
     
  25. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Every frame you are doing something like this:
    Code (CSharp):
    player.position += movement * speed * deltaTime;
    Every time you cycle through your frame you are updating inputs and working out where the player has moved, from the last frame to where they will be in the next.

    In theory if the refresh rate was 16 ms and the input ran at 1000 hz or 1ms intervals there could be 16 new inputs most of which you will have to buffer until the next frame.

    Even with normal inputs the player could send input signals that occur within the frame time window but after the time that inputs are calculated so they will not be read until next frame.

    If we adopted output lag compensation we would change the above code to this.
    Code (CSharp):
    player.position += movement * speed * (deltaTime + outputLagOffset);
    And if our input system could capture not just the input but the exact time it occurred then we can add negative time inputs to the next frame.

    Or better still inputs could be running at 1000 hz in their own input loop and the rendering loop would just get the latest set for the next frame.
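    A rough sketch of such a decoupled, timestamped input loop (class and method names are invented, and the lambda passed to Start stands in for a real device read):

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading;

// Sketch: sample input on its own high-rate thread and timestamp each sample,
// so the render loop can later apply "negative time" inputs for its frame.
class TimestampedInput
{
    public readonly ConcurrentQueue<(double TimeMs, float Axis)> Events = new();
    readonly Stopwatch clock = Stopwatch.StartNew();
    volatile bool running = true;

    public void Start(Func<float> readAxis) => new Thread(() =>
    {
        while (running)
        {
            // Timestamp at the moment of sampling, not when consumed.
            Events.Enqueue((clock.Elapsed.TotalMilliseconds, readAxis()));
            Thread.Sleep(1);   // ~1000 Hz polling interval
        }
    }) { IsBackground = true }.Start();

    public void Stop() => running = false;
}
```

    Each render frame would then drain Events and integrate every sample over its own sub-frame interval instead of one big deltaTime.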
     
    Last edited: Jun 28, 2020
  26. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    Nah, it's done by so many offline games too. Mario 64 does it: when you input a direction, it looks forward, samples 4 steps, then resolves. Coyote time is another example of a similar idea: you sample a future position in empty space but only resolve a few milliseconds behind, so you allow the player to walk. Motion matching takes many steps into the future to blend into the present, etc.
     
  27. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    Try building a prototype of this, and you will see it does not feel any better that way.
     
    angrypenguin and JoNax97 like this.
  28. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    Are you saying Mario 64 did local player input prediction or something else? Do you have a URL that details that?
     
  29. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,553
    Let's not forget about the 80 millisecond delay in perception.
    https://blogs.scientificamerican.co...g-in-the-past-and-other-quirks-of-perception/
    Humans live in the past. What you perceive as "now" happened 80 milliseconds ago.

    Additionally, in single player games one can think of the game preparing not the current frame but the next one. Meaning, it is already calculating future values: whatever is on screen is the current frame, and the delay is player input lag.
     
    angrypenguin likes this.
  30. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    In depth article on input lag from mouse to display (Anandtech 2009)

     
  31. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    You have causality backwards here. People who play more games are likely to be better performers, and are likely to spend more money on gaming systems.

    A proper analysis would require the same players playing on different systems, and measuring how their performance changes as they switch systems.

    I note none of these curves start at zero. Which suggests that the differences are mostly in baseline measurements, rather than actual performance.
     
    DauntlessVerbosity and JoNax97 like this.
  32. Neonlyte

    Neonlyte

    Joined:
    Oct 17, 2013
    Posts:
    512
    This is possible to do, but it would only resolve part of the latency problem. The 60 Hz display rate is the bottleneck of the system you described. Even if the program updates the game world as fast as possible when it receives the input signal (which only reduces the processing latency), the user cannot perceive the update until the feedback (in this case, updated display content) is provided.

    Help me understand why knowing the timestamp of the input event would eliminate the input latency? :confused:
     
  33. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Without specialist hardware to test and time your device's input lag (mouse/USB hub/motherboard/CPU/drivers/BIOS/OS) you have no idea how long it takes.

    Also tests show that input latency can vary significantly over time so having an average device offset might be good but may not be accurate across hardware/software combinations/versions.

    With time-synchronised input devices, they could send the exact millisecond the event happened; the game/app or OS could then offset the output to compensate and provide a virtually zero-lag experience.

    Video showing various mouse input latency on CS:GO running at 240 hz


    Gaming mice tended to have a 13-16 ms input-to-display lag (if 10 ms of that is game/buffer/response time, that's still 3-6 ms that could be input lag).
     
  34. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    I don't think this is the device input event time; it is more likely the system input event time. Therefore I'm guessing this information will not compensate for input lag, but it should help with input frame lag.

    It looks like timing synchronisation/information is not a fundamental part of the USB HID interface https://www.usb.org/sites/default/files/documents/hid1_11.pdf

    Although it looks flexible enough to allow a device developer to add it as a feature.

    There is an option to poll a device for information, so if you timed this request/response as a kind of ping a number of times (with a low-level API) it should give a decent input lag estimate.
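    A sketch of that timed request/response idea (`pollDevice` here is a stand-in for a real low-level HID read, not an actual API; the averaging scheme is an assumption):

```csharp
using System;
using System.Diagnostics;

// Estimate device round-trip latency by timing repeated polls and averaging.
static double EstimatePollLatencyMs(Action pollDevice, int samples)
{
    var sw = new Stopwatch();
    double totalMs = 0;
    for (int i = 0; i < samples; i++)
    {
        sw.Restart();
        pollDevice();          // request/response round trip ("ping")
        sw.Stop();
        totalMs += sw.Elapsed.TotalMilliseconds;
    }
    return totalMs / samples;  // one-way lag would be roughly half of this
}
```

    A real version would want many samples and would probably take the median rather than the mean, since the thread can be preempted mid-measurement.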
     
    Last edited: Jun 29, 2020
  35. There is no such thing.
     
  36. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    TLDR
     
  37. Neonlyte

    Neonlyte

    Joined:
    Oct 17, 2013
    Posts:
    512
    But the event still takes time to reach your program, even if you could guarantee the timestamp is generated in reference to a universal timeline (like a global clock). You could use that latency info to offset anything based on time (like the player's next position in a movement), but for anything that does not care about time you still suffer from the event arriving late, like whether the player should be moving at all.

    A similar situation to your game input latency situation is Bluetooth audio. In that world there are already ways to get latency info, and manufacturers on all ends have adopted it, so a video player can sync graphics on the phone screen with audio played through a Bluetooth headset. The problem is that because latency exists, the audio has to start playing ahead of the graphics, so either the player skips the first dozen milliseconds of audio or the graphics do not begin until the audio has played with a head start. If the viewer wants to pause at an instant, the audio and video can never stop together. You are basically trying to sync your game world with the player's input. As long as there is latency, you receive the information late, later than you intended.
     
    Last edited: Jun 29, 2020
    JoNax97 likes this.
  38. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    3,015
    Notice how that example is different from your idea so far, because it includes the player input (mouse input)? Everybody agrees that games benefit from reducing each latency. But your idea in this thread is basically to try to react to the player before the player does it, and that won't work to actually reduce latency. You cannot simply roll the time forward to get around latency, unless you can successfully implement a local player input prediction solution that correctly predicts player input every frame.

    Arowx, this thread is suffering from the same problem that most of your other threads suffer from. You have an idea. You post a bunch links, videos, and images to build hype. But you don't discuss the actual problem with your idea. In this thread, the actual problem is you cannot successfully build a rock solid local player input prediction system. Without a local player input prediction system, you cannot roll the time forward like you are advocating. Feel free to completely ignore this basic fact and post more unrelated images and videos, though.
     
    JoNax97 likes this.
  39. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,042
    I am gonna have to agree here, we aren't really talking about any solutions (or really even a clear problem), it is just sort of fantasy speculation over non-existent tech. That is not what this forum is for. It is interesting to talk theory about game design (the correct forum), but talking theory about hardware is off-topic. No one here is developing hardware or writing game-engines. Try reddit or /. for hypothetical hardware/application stuff. If you have a working prototype/application of what you share that runs in Unity, that might be valuable discussion, but not this. Ending.
     
    Neonlyte, JoNax97 and ShilohGames like this.