# Locatable Camera - Augmented Reality Rendering

Discussion in 'VR' started by xhensiladoda, Oct 11, 2016.

Hi everybody,

I am trying to do augmented reality on HoloLens using my own marker detector, as described in the Locatable Camera documentation (https://developer.microsoft.com/en-us/windows/holographic/locatable_camera), in the section "Tag / Pattern / Poster / Object Tracking".

I am failing to draw precise spot holograms at the corners of my marker (for now it is just a chessboard).

Sequentially: I take a photo, from which I get the projection and camera-to-world matrices via the PhotoCaptureFrame:
Code (CSharp):
photoCaptureFrame.TryGetCameraToWorldMatrix(out cameraToWorldMatrix);
photoCaptureFrame.TryGetProjectionMatrix(out projectionMatrix);
With solvePnP from OpenCV I could easily retrieve the rotation and translation from chessboard to camera; the problem is that the projection matrix from HoloLens is quite unusual:
1. The focal lengths and center coordinates are not in pixels but normalized to the range -1..1
2. The y-axis is flipped
3. The (3,3) element is -1 ... what does that mean?
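
For reference, the normalized values can be converted back to pixel-unit intrinsics before building an OpenCV camera matrix. This is only a sketch under assumed conventions (u = (x_ndc + 1) * w / 2, with y flipped because NDC is y-up while pixels are y-down); the helper name is mine, not part of the HoloLens API, so verify the mapping against your device:

```cpp
#include <cassert>

// Hypothetical helper: convert focal lengths and principal point from
// HoloLens-style normalized device coordinates ([-1, 1], y-up) to pixel
// units ([0, w] x [0, h], y-down). Assumes u = (x_ndc + 1) * w / 2 and
// v = (1 - y_ndc) * h / 2; check this against your own device.
struct PixelIntrinsics { double fx, fy, cx, cy; };

PixelIntrinsics ndcToPixel(double fx_ndc, double fy_ndc,
                           double cx_ndc, double cy_ndc,
                           double w, double h) {
    PixelIntrinsics p;
    p.fx = fx_ndc * w / 2.0;          // scale focal lengths to pixels
    p.fy = fy_ndc * h / 2.0;
    p.cx = (cx_ndc + 1.0) * w / 2.0;  // shift and scale the principal point
    p.cy = (1.0 - cy_ndc) * h / 2.0;  // flip y: NDC is y-up, pixels are y-down
    return p;
}
```

With a principal point at the NDC origin and a 1280x720 frame, this puts the pixel principal point at (640, 360), as one would expect.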

So, before calling solvePnP, I adjust the pixel coordinates of the chessboard corners and invert the sign of ccx and ccy (to account for that -1 in position (3,3)), like this:
Code (Cpp):
for (size_t i = 0; i < ptI.size(); ++i) {
        float xp = ptI.at(i).x;
        float yp = ptI.at(i).y;
        //Point2d image_pos_to_zero = Point2d(xp / w, 1.f - yp / h); //<--- Is this flip in the y-axis necessary? Why does Microsoft do that in Locatable Camera?
        Point2d image_pos_to_zero = Point2d(xp / w, yp / h);
        Point2d image_pos_projected = Point2d(image_pos_to_zero.x * 2.f - 1.f, image_pos_to_zero.y * 2.f - 1.f); // Map into range [-1, 1]
        ptI_trick.push_back(image_pos_projected);
    }

    //We interpret the -1 in element (3,3) of the projection matrix as a change of sign of Cx and Cy
    projectionMatrix.at<double>(0, 2) = -1. * projectionMatrix.at<double>(0, 2);
    //projectionMatrix.at<double>(1, 2) = -1. * projectionMatrix.at<double>(1, 2); //Commented out because we removed the flip of the y-axis
    projectionMatrix.at<double>(2, 2) = -1. * projectionMatrix.at<double>(2, 2);

    Mat rvec, tvec, R, G_w2cf;
    vector<float> dist(0);
    solvePnP(ptW, ptI_trick, Mat1d(projectionMatrix, Range(0, 3), Range(0, 3)), dist, rvec, tvec);
    Rodrigues(rvec, R);
    hconcat(R, tvec, G_w2cf);
    double lastRow[4] = {0, 0, 0, 1};
    G_w2cf.push_back(Mat1d(1, 4, lastRow)); //This is the camera facing forward
    double G_cfcb_data[16] = { 1, 0, 0, 0, 0, -1, 0, 0, 0, 0, -1, 0, 0, 0, 0, 1 };
    Mat1d G_cfcb = Mat1d(4, 4, G_cfcb_data); //Matrix to change the camera from forward-facing to backward-facing as HoloLens expects, i.e. a 180° rotation about the x-axis
    Mat1d G_w2cb = G_cfcb * G_w2cf; //The matrix to convert coordinates into the HoloLens coordinate convention
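
As a sanity check on the convention change above: multiplying by a 180° rotation about the x-axis leaves x alone and negates the y and z components of a camera-space point, which is exactly the switch from OpenCV's y-down/z-forward camera to a y-up/z-backward one. A minimal, OpenCV-free sketch:

```cpp
#include <cstddef>

// Apply a 4x4 homogeneous transform to a 4-vector: out = m * v.
void mul(const double m[4][4], const double v[4], double out[4]) {
    for (int i = 0; i < 4; ++i) {
        out[i] = 0.0;
        for (int j = 0; j < 4; ++j)
            out[i] += m[i][j] * v[j];
    }
}

// 180-degree rotation about the x-axis: leaves x alone, negates y and z.
const double G_cfcb[4][4] = {
    {1,  0,  0, 0},
    {0, -1,  0, 0},
    {0,  0, -1, 0},
    {0,  0,  0, 1},
};
```

Applied to a point such as (0.1, 0.2, 1.5, 1), this yields (0.1, -0.2, -1.5, 1).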
With the rotation and translation from chessboard to camera, I get the chessboard points in camera coordinates. Then I apply the camera-to-world transformation (and correctly get positive z coordinates) and ...
Code (Cpp):
Mat1d world_norm = Mat1d(ptW.at(i));
world_norm.push_back(1.);
Mat1d tempValue = G_w2cb * world_norm; //Chessboard point in camera coordinates
//Transform into world coordinates
Mat1d camera_to_world_coord_mat = cameraToWorldMatrix * tempValue;
Point3d camera_to_world_coord_point = Point3d(camera_to_world_coord_mat.at<double>(0, 0), camera_to_world_coord_mat.at<double>(1, 0), camera_to_world_coord_mat.at<double>(2, 0));
Finally, I plot the four corners in Unity with:
Code (CSharp):
sphere1.transform.position = new Vector3(-0.169188f, 0.191340f, 1.636722f); //A
sphere2.transform.position = new Vector3(0.424073f, 0.178179f, 1.610238f); //B
sphere3.transform.position = new Vector3(0.414258f, -0.198266f, 1.577438f); //C
sphere4.transform.position = new Vector3(-0.179003f, -0.185105f, 1.603922f); //D
We viewed the result in two different ways: through the HoloLens and through the preview of the Microsoft HoloLens app for desktop. The red circles mark what we see through the HoloLens device, and the small green ones what we see through the desktop app (the top-left ones coincide, so we did not draw the red one). The blue spheres mark the correct positions where the spheres should appear.

Unfortunately, the spheres are not aligned with the desired positions, the blue points (see the attached image). Also, what we see in the remote desktop app and what we actually see on the HoloLens device differ, but that is not the big issue. The most important thing is that the spheres are not centered on the corners.
Does anyone have any idea about this? Do you think the problem is in the estimation of the coordinates or in the rendering in Unity?

The misalignment is quite strong, much bigger than 10 pixels; I say this because I cannot believe an error this large falls within the accuracy stated in the documentation.
Thank you!

#### Attached Files:

• Output.jpg (251.4 KB)
Last edited: Oct 12, 2016
