Translating position value at one scale to a position at another scale?

Discussion in 'VR' started by chriskim713, Jul 6, 2017.

  1. chriskim713

    chriskim713

    Joined:
    Jul 5, 2017
    Posts:
    1
    What I'm trying to do is create a canvas element whose width and height are determined by two GameObjects representing the top-left and bottom-right corners respectively. The GameObjects are placed wherever the cursor is located when I air tap.

    The Vector3 positions I get for the two GameObjects are numerically very close to each other, so when I take the difference between the x-coordinate values, or the y-coordinate values, etc., that difference is very small. When I then instantiate the canvas based on the calculated dimensions, the resulting canvas's width and height under the RectTransform come out really small (note: scale is at (1, 1, 1)), too small to place any UI on.

    This can be remedied if the scale value in the RectTransform is really small instead of (1, 1, 1). I came across this in the examples provided in the HoloToolkit: the canvas elements in those examples have really small scale values. For example, the ColorPickerExample has a canvas scale of (0.05520535, 0.05520535, 0.05520535), and with that scale the canvas size can be 100 by 100.
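    For reference, here's a minimal sketch of what I'm doing now; the names (topLeftMarker, bottomRightMarker, canvasPrefab) are just placeholders for my actual objects:

    Code (CSharp):
    using UnityEngine;

    // Sketch of the current approach: size a world-space canvas from two
    // marker GameObjects placed at the air-tap positions.
    public class CanvasFromCorners : MonoBehaviour
    {
        public Transform topLeftMarker;      // placed at the first air tap
        public Transform bottomRightMarker;  // placed at the second air tap
        public Canvas canvasPrefab;          // world-space canvas prefab

        public void CreateCanvas()
        {
            Vector3 tl = topLeftMarker.position;
            Vector3 br = bottomRightMarker.position;

            // World-space extents; on the HoloLens these come out tiny,
            // e.g. a region 20 cm across gives width = 0.2.
            float width  = Mathf.Abs(br.x - tl.x);
            float height = Mathf.Abs(tl.y - br.y);

            Canvas canvas = Instantiate(canvasPrefab, (tl + br) * 0.5f, Quaternion.identity);
            RectTransform rect = canvas.GetComponent<RectTransform>();

            // With scale left at (1, 1, 1), this makes sizeDelta 0.2 x 0.2 --
            // far too small to lay out any UI on.
            rect.sizeDelta = new Vector2(width, height);
        }
    }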

    So my question is: how can I place a canvas at any scale, given the desired boundary points?

    Thanks in advance!
     
  2. unity_andrewc

    unity_andrewc

    Unity Technologies

    Joined:
    Dec 14, 2015
    Posts:
    228
    I don't have much expertise with the HoloToolkit or its implementation, but based on your description, it sounds like an important missing piece of the puzzle is the depth component of the positions that seed these x- and y-coordinate values you mention. I would need to look at some code to be absolutely certain, but I'm pretty sure that a scale of (1, 1, 1) in the UI system should, on the HoloLens, mean the canvas's size is expressed in real-world meters. If that's not the case, it sounds like something that needs fixing.
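    If it helps, here's a rough sketch of how I'd expect that relationship to work; all of the names here are placeholders, and I haven't verified this against the HoloToolkit examples myself. The idea is to keep the world-space size fixed while trading a small uniform scale for a usefully large sizeDelta:

    Code (CSharp):
    using UnityEngine;

    // Sketch: place a world-space canvas spanning two world points while
    // keeping its sizeDelta in comfortable UI units. Assumes one UI unit at
    // scale (1, 1, 1) corresponds to one meter in the world.
    public class ScaledCanvasPlacer : MonoBehaviour
    {
        public Canvas canvasPrefab;         // world-space canvas (placeholder)
        public float canvasScale = 0.001f;  // 1 UI unit = 1 mm in the world

        public Canvas PlaceBetween(Vector3 topLeft, Vector3 bottomRight)
        {
            // Desired world-space size in meters.
            float worldWidth  = Mathf.Abs(bottomRight.x - topLeft.x);
            float worldHeight = Mathf.Abs(topLeft.y - bottomRight.y);

            Vector3 center = (topLeft + bottomRight) * 0.5f;
            Canvas canvas = Instantiate(canvasPrefab, center, Quaternion.identity);

            RectTransform rect = canvas.GetComponent<RectTransform>();
            rect.localScale = Vector3.one * canvasScale;

            // Dividing the world size by the scale keeps the physical size
            // the same while giving the canvas plenty of UI units: a 0.2 m
            // wide region at scale 0.001 becomes 200 units wide.
            rect.sizeDelta = new Vector2(worldWidth / canvasScale,
                                         worldHeight / canvasScale);
            return canvas;
        }
    }

    That division is essentially what the HoloToolkit examples are baking in with their small canvas scales.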

    So, when you make tap gestures, how do you convert the values handed to you (in the tap gesture callbacks provided by GestureRecognizer) into scale values for your canvas object? Without that piece of the puzzle, I'm not sure how to provide guidance.