
Question: Display an image in the native resolution of a Head Mounted Display

Discussion in 'VR' started by valentin_siebenkees, Jan 19, 2021.

  1. valentin_siebenkees

    Joined: Nov 4, 2020
    Posts: 2
    Hello Community,

    I am trying to analyze the displays of VR/AR devices (at the moment I have the HP Reverb and the HoloLens 2).
    I am currently working with Unity and the Mixed Reality Toolkit, and I am new to both.
    For the measurement I want to display a test image at the device's native resolution and, if possible, in the original quality without any compression.
    I would appreciate any help on how to achieve this goal!

    So far I have my test images in native resolution (at least for the HP Reverb, since the HoloLens is a little different) and I'm playing around with ways to display them. I tried rendering them as a Sprite, and attaching the texture to a material on an object. The issue I'm having is that I have to place the Sprite or object at a certain distance in the 3D Unity space to make it visible on the HMD, but it seems that changing the z position changes the rendered resolution of the Sprite/object.

    Is there something like a reference point/distance at which the rendered resolution actually matches the native resolution?

    Are there any other approaches to display an image?

    Also, I am not quite sure which import settings I have to choose when loading the texture into the Unity Editor; it seems like a lot of interpolation is happening there.
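
    From what I have read, compression, mipmaps and bilinear filtering can all resample the texture, so my current guess is that they all need to be disabled at import. A minimal sketch of the settings I mean, written as an AssetPostprocessor (untested guesswork on my part; the folder name is just an example):

    Code (CSharp):
    using UnityEditor;
    using UnityEngine;

    // Forces lossless, unfiltered import settings on every texture in a
    // "TestImages" folder (the folder name is only an example).
    public class TestImagePostprocessor : AssetPostprocessor
    {
        void OnPreprocessTexture()
        {
            if (!assetPath.Contains("TestImages"))
                return;

            var importer = (TextureImporter)assetImporter;
            importer.textureCompression = TextureImporterCompression.Uncompressed;
            importer.mipmapEnabled = false;                     // no mip interpolation
            importer.filterMode = FilterMode.Point;             // nearest-neighbour sampling
            importer.npotScale = TextureImporterNPOTScale.None; // keep exact pixel dimensions
            importer.maxTextureSize = 4096;                     // must be >= the image size
        }
    }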

    Thank you very much for any help and greetings,

    Valentin
     
  2. joejo

    Unity Technologies

    Joined: May 26, 2016
    Posts: 958
    Knowing what you are trying to do/accomplish may impact any solution that can be offered. Can you provide more information about what it is you are doing and why?
     
  3. valentin_siebenkees

    Joined: Nov 4, 2020
    Posts: 2
    Hello Joejo,

    Thanks for your reply!

    As requested, some further information about what I'm trying to do:

    My goal is to evaluate the displays of VR/AR devices in various respects, for example luminance or color reproduction.
    To achieve this I have professional measurement cameras.
    For the measurement, the VR/AR device will be mounted in a stationary rig with a measurement camera placed in the eyebox of the device.
    In the future I want to build measurement automations where I can control both my measurement camera and the image I am displaying on the device.

    But right now I am only concerned with displaying images on the VR/AR devices.

    So basically I am looking for a way to display a test image I created at the native resolution of the device (for example 2160x2160 px for the HP Reverb), at exactly that resolution, so I can produce evaluable data by measuring with the camera.

    To display my test images on the device, I have tested two approaches so far:

    1. Rendering my test images as Sprites.
    2. Adding the test image to a material and then adding the material to an object (cube).

    But to make the image visible to the real camera (or to me as a user in my stationary setup), I have to place the Sprite/object at some distance from the virtual Unity camera, because placing it inside the virtual camera would of course mean not seeing it at all. For testing I therefore attached a script to the object that changes its z position at runtime, and from what I observed, changing the z position changed the resolution of the rendered image.
    So these two approaches don't work for me, because I can't produce evaluable data if the test image I am measuring doesn't have the resolution of the display.
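
    If my geometry is right, a quad of world height h placed at distance d from a camera with vertical field of view fov covers h / (2 * d * tan(fov/2)) of the frustum height, which would explain why moving it along z changes the apparent resolution. For illustration, here is a sketch (my own, untested) of how the scale would have to track the distance to keep a 1:1 texel-to-pixel ratio in the eye buffer. Note that the eye buffer is still not the physical panel:

    Code (CSharp):
    using UnityEngine;

    // Sketch: rescale a textured quad (Unity's Quad primitive is 1x1 world
    // units) every frame so that one texel of the test image maps to one
    // pixel of the eye buffer at the quad's current distance.
    public class KeepTexelToPixelScale : MonoBehaviour
    {
        public Camera eyeCamera;    // the XR rig's camera
        public Texture2D testImage; // e.g. 2160x2160 for the HP Reverb

        void LateUpdate()
        {
            float d = Vector3.Distance(eyeCamera.transform.position, transform.position);

            // World-space height of the view frustum at distance d.
            float frustumHeight = 2f * d * Mathf.Tan(eyeCamera.fieldOfView * 0.5f * Mathf.Deg2Rad);

            // Cover exactly testImage.height of the eye buffer's pixel rows.
            float worldHeight = frustumHeight * testImage.height / eyeCamera.pixelHeight;
            float worldWidth = worldHeight * testImage.width / (float)testImage.height;
            transform.localScale = new Vector3(worldWidth, worldHeight, 1f);
        }
    }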

    Please let me know if you need any more information!

    And again thank you very much for your help.

    Best regards,
    Valentin
     
  4. joejo

    Unity Technologies

    Joined: May 26, 2016
    Posts: 958
    Here's some feedback I was able to gather for you; it's not mine, so there's not much I can add beyond what you can glean from it.

    If I understand correctly, the customer is trying to match up rendered pixels on the display (captured with a professional camera) with pixels in a reference image to evaluate how well the display reproduces the image, etc. It’s not clear to me how they propose to deal with lens distortion for VR headsets though… maybe they’re removing the lenses??

    In any case, a more robust strategy might be to use a calibration image (e.g. a QR code or something like that) which they can use to automatically map the image space of the display and correlate with the reference image*. That solves at least half of the problem (establishing correlation between the reference and captured image spaces).

    The other potential half of the problem might be attempting to avoid any aliasing of the image – however this is basically impossible since reprojection is always sampling from the rendered image and there is no way to present pixels directly to the display without any interpolation. If this is something they care about, the best they can likely do is to ensure the image is high enough resolution to be more pixel dense than the physical displays, and accept whatever aliasing occurs.
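
    As a concrete example of that last point: in Unity the eye buffers can at least be rendered at a higher resolution than the default, so the compositor resamples from a denser source image. A minimal sketch using the built-in XR settings (my addition, not part of the feedback above):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.XR;

    // Minimal sketch: render the eye buffers at twice the default resolution
    // so the compositor's resampling works from a denser source image.
    public class EyeBufferSupersampling : MonoBehaviour
    {
        void Start()
        {
            XRSettings.eyeTextureResolutionScale = 2.0f;
            Debug.Log($"Eye buffer: {XRSettings.eyeTextureWidth} x {XRSettings.eyeTextureHeight}");
        }
    }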

    *Specifically, the idea would be to first display an image with some detectable structure, and use computer vision libraries to detect the marked regions of the image and compute the image space mapping. Then, display the desired reference image, and use the mapping to correlate pixels captured from the camera with pixels in the reference image.
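
    To make the footnote concrete, here is a sketch of that two-step mapping using a displayed chessboard as the detectable structure and the third-party OpenCvSharp bindings. The signatures are from memory, so treat it as an illustration rather than drop-in code:

    Code (CSharp):
    using System;
    using System.Linq;
    using OpenCvSharp;

    class DisplayMappingSketch
    {
        static void Main()
        {
            // Step 1: the headset displays a chessboard; the measurement
            // camera photographs it.
            using var photo = Cv2.ImRead("captured_chessboard.png", ImreadModes.Grayscale);

            var patternSize = new Size(9, 6); // inner corners of the displayed pattern
            if (!Cv2.FindChessboardCorners(photo, patternSize, out Point2f[] photoCorners))
                throw new InvalidOperationException("Chessboard not found in the capture.");

            // The same corners in reference-image pixel coordinates are known
            // by construction, since we generated the chessboard image ourselves.
            Point2f[] refCorners = LoadKnownCornerPositions(); // hypothetical helper

            // Fit a homography that maps reference pixels to captured pixels.
            using Mat h = Cv2.FindHomography(
                refCorners.Select(p => new Point2d(p.X, p.Y)),
                photoCorners.Select(p => new Point2d(p.X, p.Y)),
                HomographyMethods.Ransac);

            // Step 2: display the actual test image, photograph it, and use
            // the same homography to find where any reference pixel landed.
            Point2f[] probe = { new Point2f(1080f, 1080f) }; // centre of a 2160x2160 image
            Point2f[] inCapture = Cv2.PerspectiveTransform(probe, h);
            Console.WriteLine($"Reference (1080,1080) -> capture {inCapture[0]}");
        }

        // Placeholder for the corner positions of the generated chessboard, in pixels.
        static Point2f[] LoadKnownCornerPositions() => new Point2f[9 * 6];
    }
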
    Hopefully that helps with what you are attempting to do?