Distributed Timewarped Rendering - An Idea

Discussion in 'AR/VR (XR) Discussion' started by Dave-Hampson, Jan 9, 2015.

  1. Dave-Hampson

    Dave-Hampson

    Unity Technologies

    Joined:
    Jan 2, 2014
    Posts:
    150
    I had an idea this morning for a new type of VR scene rendering which uses two devices. I haven't written any code for this yet, but I find the concept extremely exciting. The basis for this idea comes from a few places:

    - John Carmack's Timewarp (obviously)
    - The fact that Gear VR is very good at playing 360 degree video content, but the bandwidth is too huge to transmit it over the internet
    - Steam's In-Home Streaming technology (which proved to me that you can have low-latency gaming over Wifi at 720p and beyond)
    - Nate Mitchell talking at CES about how Wireless HDMI "isn't quite there yet"

    So the basic idea is this (a rough sketch of the capture step follows the list):
    - User launches VR 'server' experience on PC
    - VR server opens up a TCP port
    - PC starts rendering to an internal cubemap image (at say 6x2048x2048) at the player's position
    - PC also renders a depthmap for each face
    - User launches VR 'client' on (say) Gear VR
    - Client connects to server and starts to receive 6x2048x2048 colour+depth video stream
    - Client renders it, but uses the updated head position and orientation to Timewarp the image to be exactly correct
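
    To make that capture step concrete, here is a minimal sketch of what the server side might look like as a Unity script. Everything here (the CubemapStreamer name, its fields, the missing encode/send step) is just a placeholder for illustration:

    Code (CSharp):
    using UnityEngine;

    // Rough sketch of the server-side capture: render the six cube faces from
    // the player's position every frame. Encoding and network transmission are
    // deliberately left out.
    public class CubemapStreamer : MonoBehaviour
    {
        public Transform player;          // position to capture from
        public Camera captureCamera;      // dedicated capture camera
        public int faceSize = 2048;

        RenderTexture[] colourFaces = new RenderTexture[6];

        // World-axis-aligned face orientations: +X, -X, +Y, -Y, +Z, -Z
        static readonly Quaternion[] faceRotations =
        {
            Quaternion.LookRotation(Vector3.right,   Vector3.up),
            Quaternion.LookRotation(Vector3.left,    Vector3.up),
            Quaternion.LookRotation(Vector3.up,      Vector3.back),
            Quaternion.LookRotation(Vector3.down,    Vector3.forward),
            Quaternion.LookRotation(Vector3.forward, Vector3.up),
            Quaternion.LookRotation(Vector3.back,    Vector3.up),
        };

        void Start()
        {
            captureCamera.fieldOfView = 90f;   // 90x90 degrees per square face
            captureCamera.aspect = 1f;
            for (int i = 0; i < 6; i++)
                colourFaces[i] = new RenderTexture(faceSize, faceSize, 24);
        }

        void LateUpdate()
        {
            captureCamera.transform.position = player.position;
            for (int i = 0; i < 6; i++)
            {
                captureCamera.transform.rotation = faceRotations[i];
                captureCamera.targetTexture = colourFaces[i];
                captureCamera.Render();
                // A matching depth face could be rendered here as well, e.g.
                // with a second pass using a depth-only shader.
            }
            // The faces (plus the capture position) would then be compressed
            // and pushed down the TCP connection to the headset.
        }
    }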

    The key thing here is that the bulk of the rendering happens on a high-powered device, but the final timewarp on the Android device means latency is low. So the mobile device (with limited GPU power) spends all its grunt simply timewarping the cubemap to look correct, at the highest possible framerate. It might also be possible to clock it down to reduce overheating. It also means that (as with standard Timewarp) the PC GPU doesn't have to hit 60-90fps; it could quite easily run at 30fps, and as long as the player didn't move too fast it would still look largely correct.
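
    On the client, the rotational part of that timewarp essentially boils down to computing how far the head has turned since the cubemap was captured and handing that delta to whatever shader re-samples the cubemap. A rough sketch (the component name, the _WarpRotation property and the warp shader itself are all assumed):

    Code (CSharp):
    using UnityEngine;

    // Sketch of the client-side rotational timewarp. Attach alongside the eye
    // camera; the actual re-sampling of the cubemap happens in a shader that
    // is not shown here.
    public class CubemapTimewarp : MonoBehaviour
    {
        public Transform head;           // latest tracked head pose
        public Material warpMaterial;    // samples the streamed cubemap per pixel

        // Orientation the server rendered the current cubemap in. Identity if
        // the faces are world-axis aligned (as in the capture sketch above);
        // updated whenever a new cubemap arrives over the network.
        public Quaternion captureRotation = Quaternion.identity;

        void OnPreRender()
        {
            // Rotation that takes a direction from head-local space into the
            // cubemap's space, using the freshest head sample available.
            Quaternion delta = Quaternion.Inverse(captureRotation) * head.rotation;

            // The warp shader rotates each fragment's view direction by this
            // matrix before sampling the cubemap, so the image stays locked to
            // the world even if the stream is a frame or two behind.
            warpMaterial.SetMatrix("_WarpRotation",
                Matrix4x4.TRS(Vector3.zero, delta, Vector3.one));
        }
    }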

    What's one of the potential problems with this technique? Well, the Wifi connection is going to be on all the time, which could flatten the battery quickly.

    What would be the next step? Well, I think step one would be for someone to try rendering a 6x2048x2048 cubemap with colour+depth and displaying it on a headset with Timewarp.
    The next step would be to create an agreed protocol for the TCP connection; ideally it would have these properties (a straw-man sketch of the messages follows the list):

    - An open protocol, so anyone can implement or support it (it could even be retrofitted into an existing PC game)
    - Video stream for 6x2048x2048
    - Expandable for different video compression techniques
    - Support for updating just a portion of the cubemap instead of the whole thing (e.g. if the user is currently facing 'North', you can probably omit transmitting the 'South' face, since they aren't going to be able to turn their head 90 degrees in under 50 ms). Maybe even just send 3 cubemap faces, or 1 cubemap face and a portion of 3 or 4 others.
    - Communication from client -> server on head position and orientation
    - Communication from client -> server on joypad control state
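
    As a straw man for what those messages might look like on the wire; the message IDs, field order and units below are placeholders rather than a finished spec:

    Code (CSharp):
    using System.IO;
    using UnityEngine;

    // Straw-man framing for the control and video channels.
    public static class VrStreamProtocol
    {
        public const byte MsgHeadPose   = 1;  // client -> server: position + orientation
        public const byte MsgJoypad     = 2;  // client -> server: controller state
        public const byte MsgFaceUpdate = 3;  // server -> client: (partial) cubemap face

        // Where the head is right now, so the server can decide which cubemap
        // faces are worth (re)transmitting.
        public static void WriteHeadPose(BinaryWriter w, Vector3 pos, Quaternion rot, double timestamp)
        {
            w.Write(MsgHeadPose);
            w.Write(timestamp);
            w.Write(pos.x); w.Write(pos.y); w.Write(pos.z);
            w.Write(rot.x); w.Write(rot.y); w.Write(rot.z); w.Write(rot.w);
        }

        // Header for one face (or sub-rectangle of a face). The dirty-face
        // bitmask is what lets the server skip e.g. the face behind the player.
        public static void WriteFaceHeader(BinaryWriter w, byte faceIndex, byte dirtyFaceMask, int payloadBytes)
        {
            w.Write(MsgFaceUpdate);
            w.Write(faceIndex);
            w.Write(dirtyFaceMask);
            w.Write(payloadBytes);   // compressed colour+depth payload follows
        }
    }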

    The next step would be to implement that in a game engine. Imagine if this could be a Unity Prefab that people could import and drop into their scene; it would instantly turn any game into a cable-free Gear VR experience.

    I thought "Distributed Timewarped Rendering" sounded like a good name for it, since it conveys the idea that the Timewarp process is happening not just in different parts of the engine, but also on different physical GPUs.

    Another advance would be to build on the idea of Distributed Timewarp and make a more complex distribution, for example:
    - World: PC GPU, Tonemapping: PC GPU, Characters: Android GPU, HUD: Android GPU, Shadows: Android GPU
    - World: PC GPU, Tonemapping: PC GPU, Characters: PC GPU, HUD: Android GPU, Shadows: Android GPU
    Or even rendering multiple cubemap faces on different physical PCs. Or the reverse situation: allowing multiple headsets to connect as clients to the same PC game and serving multiple cubemaps, for a great multiplayer experience.

    I'm starting to think that this technique could end up being a kind of 'Deferred Rendering for VR'. It's interesting that it suffers from some of the same problems (e.g. transparency doesn't work).

    Thoughts? Has anyone done any of this already?

    *edit* Just realised that this could also be used in exactly the same way for audio as for visuals: the HRTF function for each sound source could be calculated on the PC, on the Android device, or both. Is there a way to 'timewarp' a previously HRTF-ed sound for a slight change in orientation? Probably not yet, but it's an interesting concept!
     
    Last edited: Jan 9, 2015
  2. wccrawford

    wccrawford

    Joined:
    Sep 30, 2011
    Posts:
    2,039
    I don't think the sound position will have changed enough to really make a difference, whereas the video position matters a *lot*. I'm not sure it makes sense to spend CPU cycles on trying to adjust that unless the latency is really bad, in which case the whole timewarp thing probably crashes and burns anyhow... Not to mention the effect on input latency, which can't be timewarped without a real time machine.

    The battery problem could be helped a bit just by having a USB port on the headset with a battery/capacitor that recharges the phone for a while, extending wireless playtime. The player *should* be taking breaks anyhow, which gives some charge time as well.
     
  3. Dave-Hampson

    Dave-Hampson

    Unity Technologies

    Joined:
    Jan 2, 2014
    Posts:
    150
  4. hustmouse

    hustmouse

    Joined:
    Aug 3, 2016
    Posts:
    1
    Hi Dave, have you implemented cloud rendering for VR? Timewarp can handle head rotation, but what about positional timewarp? It seems that it needs a depth buffer, but a depth buffer is very hard to stream from the cloud, even with compression.
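
    Roughly, per texel the positional case seems to need something like this, which is why the depth stream looks unavoidable (just my sketch of the idea, not code from any real implementation):

    Code (CSharp):
    using UnityEngine;

    public static class PositionalWarp
    {
        // Re-project one cubemap sample for a new eye position. Without the
        // per-texel depth there is no way to reconstruct the world point,
        // which is why rotation-only timewarp can skip the depth buffer but
        // positional timewarp cannot.
        public static Vector3 Reproject(Vector3 captureEyePos, Vector3 currentEyePos,
                                        Vector3 sampleDir, float depth)
        {
            // World-space point this texel was rendered from, using the depth
            // streamed alongside the colour.
            Vector3 worldPoint = captureEyePos + sampleDir.normalized * depth;

            // Direction to look that point up from the eye's new position.
            return (worldPoint - currentEyePos).normalized;
        }
    }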