Change Mock HMD settings?

Discussion in 'AR/VR (XR) Discussion' started by Jmschrack, Oct 6, 2018.

  1. Jmschrack

    Jmschrack

    Joined:
    Jun 23, 2015
    Posts:
    5
    So side-by-side (SBS) splitscreen was replaced with Mock HMD. I'm prototyping some custom VR hardware; however, the field of view in Unity is locked to 111.96 degrees. I read that this mimics the Vive's settings. Is there a way to change this?
     
  2. BrandonFogerty

    BrandonFogerty

    Unity Technologies

    Joined:
    Jan 29, 2016
    Posts:
    82
    Hi @Jmschrack!

    Unfortunately, you cannot change the FOV.
     
  3. Jmschrack

    Jmschrack

    Joined:
    Jun 23, 2015
    Posts:
    5
    That's unfortunate. Since the hardware I'm working with isn't a native Vive or Oculus device, is there a way to enable Single Pass Stereo Rendering without going through the XR settings? (I attempted enabling multi-eye in the Scriptable Render Pipeline, but without XR enabled, Unity just crashes to desktop when you run the scene in the Editor.)

    I can get Unity to render to the headset properly using a two camera setup, but I'd like to get the performance savings of SinglePassStereo.
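    For reference, the two-camera fallback I mean looks roughly like this (a hypothetical sketch only — the class name, IPD value, and FOV are placeholders for whatever the custom hardware needs, not anything from Unity's XR stack):

    ```csharp
    using UnityEngine;

    // Hypothetical two-camera side-by-side stereo rig.
    // Attach to an empty "head" GameObject; ipd and fieldOfView are placeholders.
    public class TwoCameraStereoRig : MonoBehaviour
    {
        public float ipd = 0.064f;       // interpupillary distance in meters
        public float fieldOfView = 100f; // per-eye vertical FOV for the custom HMD

        void Start()
        {
            CreateEyeCamera("LeftEye",  -ipd * 0.5f, new Rect(0f,   0f, 0.5f, 1f));
            CreateEyeCamera("RightEye",  ipd * 0.5f, new Rect(0.5f, 0f, 0.5f, 1f));
        }

        Camera CreateEyeCamera(string name, float xOffset, Rect viewport)
        {
            var go = new GameObject(name);
            go.transform.SetParent(transform, false);
            go.transform.localPosition = new Vector3(xOffset, 0f, 0f);

            var cam = go.AddComponent<Camera>();
            cam.fieldOfView = fieldOfView;
            cam.rect = viewport; // render into the left/right half of the target
            return cam;
        }
    }
    ```

    Each eye renders the whole scene separately, which is exactly the cost single-pass stereo would avoid.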
     
  4. BrandonFogerty

    BrandonFogerty

    Unity Technologies

    Joined:
    Jan 29, 2016
    Posts:
    82
    Hi @Jmschrack!

    You cannot enable single-pass stereo rendering via SRP unless XR is enabled in the player settings. We have considered making the Mock HMD more configurable in the past. If we were to allow this, what would be useful to configure? Thanks!
     
  5. Jmschrack

    Jmschrack

    Joined:
    Jun 23, 2015
    Posts:
    5
    Hey @BrandonFogerty !

    For this project, being able to change the FOV and IPD would be a great starting point. I think being able to set a projection matrix would be the next logical step.

    While we're on the subject of extensibility, the ability to set shader constants from the render thread would be nice as well. The VR kit I'm prototyping with has a "timewarp" style feature akin to Oculus', where you query the sensors right before rendering to set some shader values. This shifts/scales the image to compensate for head movement. Without it, I can see a slight lag when I move my head around. It seems like I'll need to call a native rendering plugin to be able to set the shader values and render the distortion meshes, which is frustrating given how easily extensible everything else in SRP has been so far.

    I tried using the CommandBuffer.SetGlobalFloat functions, but those take a value type. If I could feed them a reference, or a delegate that generates the data at execution time, that would be perfect.
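    To make the problem concrete, here is a minimal sketch of what I'm doing (illustrative only — the component name and ReadHeadYaw are my stand-ins, not real APIs):

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering;

    // Illustrates why CommandBuffer.SetGlobalFloat latches a stale value:
    // the float is baked into the buffer when it is *recorded* on the main
    // (simulation) thread, not when the buffer executes on the render thread.
    public class TimewarpLatchExample : MonoBehaviour
    {
        CommandBuffer cmd;

        void OnEnable()
        {
            cmd = new CommandBuffer { name = "Timewarp params" };
            GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeForwardOpaque, cmd);
        }

        void LateUpdate()
        {
            cmd.Clear();
            // This value is captured NOW, on the simulation thread. By the
            // time the render thread executes the buffer, the head may have
            // moved again, so the shader sees slightly outdated data.
            float yaw = ReadHeadYaw();
            cmd.SetGlobalFloat("_TimewarpYaw", yaw);
        }

        float ReadHeadYaw() { return 0f; } // placeholder for real sensor code
    }
    ```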
     
  6. BrandonFogerty

    BrandonFogerty

    Unity Technologies

    Joined:
    Jan 29, 2016
    Posts:
    82
    Hi @Jmschrack!

    Thanks for the feedback! I want to make sure I understand your request regarding setting a shader param on the render thread. If I understand correctly, you are currently using CommandBuffer.SetGlobalFloat. The problem you are encountering is that you are passing a value to this method on the simulation thread (i.e. from a C# script). By the time the command buffer is executed on the render thread, a considerable amount of time has transpired, so the value you passed in is no longer the most up-to-date one. Currently, the only way to solve this is to create a custom render DLL that intercepts events via CommandBuffer.IssuePluginEventAndData. You would rather have something like a CommandBuffer.IssueSRPEventData(c# delegate, data), where the C# delegate would be executed on the render thread so that you could set a shader property inside it?

    If my understanding of your request is not correct, then please ignore my response below.

    The reason we can't do this is that all C# scripts are executed on the simulation thread. If we allowed it, you would be able to access objects that live only on the simulation thread from the render thread, which would cause all kinds of threading issues. For example, users might try to manipulate camera settings on the render thread, which is not allowed, as the camera can only function on the simulation thread. We also wouldn't want to synchronize the threads for a new API like this, as it would introduce a stall between the simulation and render threads, which would hurt performance and add more latency. I think for now the best approach is what you suggested: make those kinds of performance-critical changes in a native plugin, which provides a safer environment for developing render-thread-specific functionality.
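    The C# side of that native-plugin route can be sketched roughly like this (the "TimewarpPlugin" DLL, GetRenderEventFunc export, and event id are hypothetical — the native callback it returns is what Unity invokes on the render thread):

    ```csharp
    using System;
    using System.Runtime.InteropServices;
    using UnityEngine;
    using UnityEngine.Rendering;

    // Managed side of a hypothetical native rendering plugin. The native
    // "TimewarpPlugin" library would export GetRenderEventFunc(), returning
    // a function pointer Unity calls on the render thread with (eventId, data).
    public class NativeTimewarpDispatch : MonoBehaviour
    {
        [DllImport("TimewarpPlugin")]
        static extern IntPtr GetRenderEventFunc();

        CommandBuffer cmd;
        IntPtr sharedState; // unmanaged memory the native side reads at execute time

        void OnEnable()
        {
            sharedState = Marshal.AllocHGlobal(sizeof(float));
            cmd = new CommandBuffer { name = "Timewarp native event" };
            // The native callback runs on the render thread, so it can sample
            // the sensor at the last possible moment before drawing and set
            // shader constants through the graphics API directly.
            cmd.IssuePluginEventAndData(GetRenderEventFunc(), /*eventId*/ 1, sharedState);
            GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeImageEffects, cmd);
        }

        void OnDisable()
        {
            Marshal.FreeHGlobal(sharedState);
        }
    }
    ```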
     
  7. Jmschrack

    Jmschrack

    Joined:
    Jun 23, 2015
    Posts:
    5
    Salutations @BrandonFogerty !

    That is correct. The only caveat is that this render delay is only noticeable with the multi-camera setup; with single-pass stereo it was almost unnoticeable. (I say almost because I had a gut feeling that it was a tad off, which I didn't feel when trying other VR apps.) The faster the user turns their head, the more noticeable the delay is.

    Forgoing C# delegates, would it be possible to have a CommandBuffer.SetGlobalFloat/Vector that takes a reference type instead of a value type? For example:
    public void SetGlobalFloat(string name, IntPtr someNativeFunction);
    where the pointer refers to a C++ function of a type something like:
    extern "C" float UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API SomeNativeFunction();
    Or an IntPtr that references a vector? That way, I could simplify my native code by only needing to change the value at the reference.

    It seems heavy-handed to go through all the trouble of using a native rendering plugin to render the distortion mesh, shaders, and camera textures, just to get the end result of changing shader variables from the render thread. On that subject, would you know of any easier way of accomplishing this currently?

    I understand this is an edge case that is probably only relevant to VR, as anywhere else, no one will notice a few milliseconds.
     
    Last edited: Oct 17, 2018
  8. neverfun

    neverfun

    Joined:
    Dec 13, 2013
    Posts:
    5
    Hey @Jmschrack

    It looks like I'm in the same boat as you on this, so your original issue about being able to configure Mock HMD attributes is not an isolated request. It's frustrating that I can build/run my project using 2017.1, but not 2017.2+

    Have you come up with any solution for using XRSettings.enabled and Mock HMD, or are you using a (brute-force) two-camera solution?
     
    GuyTidhar likes this.
  9. GuyTidhar

    GuyTidhar

    Joined:
    Jun 24, 2009
    Posts:
    320
    @Jmschrack, @morshmelo

    That makes three of us. Are you guys still using a two-camera configuration?

    @BrandonFogerty

    Any updates on using XR for custom XR solutions? We're still brute-forcing two cameras for the left/right projections.

    Thanks!
     
  10. neverfun

    neverfun

    Joined:
    Dec 13, 2013
    Posts:
    5
    Yes. Since Mock HMD parameters aren't configurable at runtime, using two Cameras is my only option.
     
  11. Exession

    Exession

    Joined:
    Jun 18, 2018
    Posts:
    1
    Hi @BrandonFogerty ,

    I need to render stereo to a single interleaved target (where the first column of pixels is the right eye, the second column the left, the third right, the fourth left, and so on).
    I'm currently doing this with two cameras and a simple shader that combines the two targets using OnRender. While this works perfectly well, it's obviously very wasteful, and it could all be achieved much more efficiently using single-pass stereo rendering.
    None of the existing formats are useful to me and I've not found a way to adapt them.
    Is it possible within the existing unity setup (or even SRP)?
    Is there a way to insert a custom camera shader to do this?
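    For clarity, the column interleave I describe boils down to a per-pixel eye select; in C# terms the mapping is just (a hypothetical helper for illustration, zero-based columns, column 0 = right eye as described above):

    ```csharp
    // Which eye owns a given screen column in the interleaved target.
    // Column 0 is the right eye, column 1 the left, alternating from there.
    static class InterleaveMapping
    {
        public enum Eye { Right = 0, Left = 1 }

        public static Eye EyeForColumn(int x)
        {
            return (x % 2 == 0) ? Eye.Right : Eye.Left;
        }
    }
    ```

    The combine shader applies the same even/odd test to the fragment's integer x coordinate to pick which eye texture to sample.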
     
    mcroswell likes this.
  12. Jmschrack

    Jmschrack

    Joined:
    Jun 23, 2015
    Posts:
    5
    Sorry @GuyTidhar, we ended up going with a dual-camera setup, unfortunately. We briefly looked at using CheatEngine (the irony is not lost on me) to try to hack the values a compiled Unity game was passing into the VRWorks SPS rendering matrix, but didn't have any solid results. I'm hoping that as SRP gets fleshed out more, we'll get a feature like this.
     
    GuyTidhar likes this.
  13. holo-krzysztof

    holo-krzysztof

    Joined:
    Apr 5, 2017
    Posts:
    18
    Is there any progress on this?

    Such a feature would be very interesting for us as well.
     
  14. tangobravo

    tangobravo

    Joined:
    Mar 23, 2018
    Posts:
    3
    Add me to the list of interested people too!

    Would it be possible for plugins to expose "Virtual Reality SDKs"? Then the core Unity APIs that the Mock HMD plugin uses could become part of the native plugin interface headers, and the Mock HMD SDK itself could potentially be open-sourced so that people wanting to customize a stereo rendering pipeline could start from there?

    I know that's probably a lot to ask, but would be ideal for my use case :)
     
    holo-krzysztof likes this.
  15. reintseri

    reintseri

    Joined:
    Aug 25, 2017
    Posts:
    7
    For what it's worth, I'm also in need of being able to change the Mock HMD settings. Single-pass rendering works great; however, since I can't change the IPD and other settings, I'm forced to use multiple cameras instead of one.
     
    holo-krzysztof likes this.
  16. pm32847

    pm32847

    Joined:
    Mar 12, 2019
    Posts:
    2
    @BrandonFogerty
    When I use the Mock HMD on Android with the Single Pass option on, I don't see a texture array with left- and right-eye layers in the graphics debugger, but rather two separate textures.
    Does that mean that rendering is actually done with multi-pass?

    I was able to see the texture array with two layers (left and right views) with both a native multiview example I created and the Cardboard option in Unity.