
Question Using Unity XR SDK to build my own AR Plug-in

Discussion in 'AR' started by wwchanninglab, Dec 26, 2022.

  1. wwchanninglab

    wwchanninglab

    Joined:
    Oct 10, 2022
    Posts:
    4
    I am currently trying to create a new AR plug-in, like ARCore or ARKit, using the Unity XR SDK. From my understanding, the Unity XR SDK should be used when creating the dynamic-link library used in an AR plug-in package, but the official Unity documentation provides limited information on this. I was wondering if anyone has relevant experience to share, or knows where I can find more information. It would be even better if there are specific examples I can refer to. Thank you.
     
  2. KyryloKuzyk

    KyryloKuzyk

    Joined:
    Nov 4, 2013
    Posts:
    1,128
    Yes, the documentation is rather scarce.

    Currently, three AR Foundation subsystems have to be written in C++: XRInputSubsystem, XRMeshSubsystem, and XRDisplaySubsystem. You only have to use the XR SDK if you wish to implement one of those in your AR provider; all other subsystems can be implemented from the C# side.

    Also, supporting the XRInputSubsystem is only necessary if you wish to support the Input Manager (Old). If you're targeting only the Input System (New), then implementing the XRInputSubsystem is not necessary.

    You can find examples of the XRInputSubsystem and XRDisplaySubsystem by requesting access to the XR SDK.
    There are not many examples of the XRMeshSubsystem, but this one helped me a lot: com.unity.xr.openxr@1.2.8. I also just found another one inside com.unity.mars-ar-foundation-providers@1.4.1.

    The Unity XR SDK is a collection of C++ headers that you compile your native library against. The hard part for me was finding a cross-compiler to build the library for all target platforms.
    The XR SDK comes with the Bee compiler, but I wasn't able to get it running. Instead, I found a hidden gem in the Zig language: it can cross-compile from any architecture to any other, which is sick.
     
    wwchanninglab and andyb-unity like this.
  3. andyb-unity

    andyb-unity

    Unity Technologies

    Joined:
    Feb 10, 2022
    Posts:
    1,025
    On the C# side, note that you will also need to register subsystem descriptors and implement an XR Loader. I wrote up the key steps here: https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@5.0/manual/implement-a-provider.html
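    To make those steps concrete, here is a minimal, hypothetical XRLoader sketch (the names MyARLoader and "My-Session" are placeholders, not part of any real package; it only wires up a session subsystem, whereas a real provider would also create camera, face, and other subsystems as described in the linked docs):

    ```csharp
    using System.Collections.Generic;
    using UnityEngine.XR.ARSubsystems;
    using UnityEngine.XR.Management;

    // Hypothetical loader for an AR provider; registered subsystem descriptors
    // are looked up by the id the provider used when registering them.
    public class MyARLoader : XRLoaderHelper
    {
        static readonly List<XRSessionSubsystemDescriptor> s_SessionDescriptors = new();

        public override bool Initialize()
        {
            // Find the session subsystem descriptor registered by the provider
            // and create the subsystem instance. "My-Session" must match the
            // id used at descriptor registration time.
            CreateSubsystem<XRSessionSubsystemDescriptor, XRSessionSubsystem>(
                s_SessionDescriptors, "My-Session");
            return GetLoadedSubsystem<XRSessionSubsystem>() != null;
        }

        public override bool Start()
        {
            StartSubsystem<XRSessionSubsystem>();
            return true;
        }

        public override bool Stop()
        {
            StopSubsystem<XRSessionSubsystem>();
            return true;
        }

        public override bool Deinitialize()
        {
            DestroySubsystem<XRSessionSubsystem>();
            return true;
        }
    }
    ```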

    See the ARKit and/or ARCore plug-ins for examples. Specifically:

    - ARKitLoader.cs: the ARKit XRLoader implementation you can use as a reference when implementing your own XRLoader
    - UnitySubsystemsManifest.json: how you tell Unity that you implemented the Meshing and/or Input subsystems
    - ARKitPackageMetadata.cs: how you tell Unity your supported build targets
    - ARKitSettings.cs: the XRConfigurationData attribute is how you render settings under Project Settings > XR > <Your Provider>
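    For orientation, a minimal UnitySubsystemsManifest.json looks roughly like the following. This is a sketch based on the XR SDK samples; "UnityMyARProvider" and the subsystem id are placeholders, and the copy shipped inside the ARKit package is the authoritative reference for the exact schema:

    ```json
    {
      "name": "My AR Provider",
      "version": "1.0.0",
      "libraryName": "UnityMyARProvider",
      "displays": [],
      "inputs": [
        { "id": "MyProvider-Input" }
      ]
    }
    ```

    The "libraryName" must match the name of your native library (without extension), so Unity knows which binary exposes the listed native subsystems.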
     
    KyryloKuzyk likes this.
  4. andyb-unity

    andyb-unity

    Unity Technologies

    Joined:
    Feb 10, 2022
    Posts:
    1,025
    If you come across more specific questions, feel free to open new threads in this forum.
     
  5. wwchanninglab

    wwchanninglab

    Joined:
    Oct 10, 2022
    Posts:
    4
    Thank you, this is really helpful.

    I plan to implement face tracking and body tracking in my AR plug-in. It looks like I won't need the XR SDK, but now I have a new question: if I am not using any Unity XR SDK functions, how can I obtain the frames captured by the camera so that the face-detection algorithm in my dynamic-link library can run on them?

    Thank you again for your reply. It helps a lot.
     
  6. andyb-unity

    andyb-unity

    Unity Technologies

    Joined:
    Feb 10, 2022
    Posts:
    1,025
  7. wwchanninglab

    wwchanninglab

    Joined:
    Oct 10, 2022
    Posts:
    4
    Thank you for taking the time to respond promptly. It is quite helpful.

    I know how to get the image in C# now, but what concerns me is how the face tracking function in the dynamic-link library gets the camera image. Using ARCore as an example:


    On line 51, the "UnityARCore_faceTracking_TryGetFaceData" function located in the ARCore DLL is obtained, and on line 30, this function is used to get the related face mesh data.
    However, this function does not receive any camera image data, so how does ARCore's face tracking algorithm calculate the face mesh data?

    All I want to know is how the functions in the ARCore DLL get the camera image to calculate the result. Are there any key points that I have missed?
     
  8. andyb-unity

    andyb-unity

    Unity Technologies

    Joined:
    Feb 10, 2022
    Posts:
    1,025
    Our ARCore DLL does not contain a face tracking implementation. Face tracking is implemented at the platform layer within ARCore itself (i.e., https://developers.google.com/ar/develop/java/augmented-faces/developer-guide). Google gives us tracking data but does not expose the camera image via the face tracking API.

    To get the camera image yourself at the application layer for your own tracking implementation, you must create an XRCpuImage following the steps in my post above.
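    A minimal sketch of that flow, assuming AR Foundation 5.x: acquire the latest CPU image from the ARCameraManager, convert it to a raw RGBA buffer, and hand that buffer to your native library. NativeFaceTracker_ProcessFrame is a hypothetical entry point standing in for whatever your DLL exports:

    ```csharp
    using Unity.Collections;
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARSubsystems;

    public class FaceFrameFeeder : MonoBehaviour
    {
        [SerializeField] ARCameraManager m_CameraManager;

        void OnEnable()  => m_CameraManager.frameReceived += OnFrameReceived;
        void OnDisable() => m_CameraManager.frameReceived -= OnFrameReceived;

        void OnFrameReceived(ARCameraFrameEventArgs args)
        {
            // Grab the most recent camera frame on the CPU, if one is available.
            if (!m_CameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
                return;

            using (image) // XRCpuImage must be disposed to release the native frame
            {
                var conversionParams = new XRCpuImage.ConversionParams(
                    image, TextureFormat.RGBA32, XRCpuImage.Transformation.MirrorY);

                int size = image.GetConvertedDataSize(conversionParams);
                using var buffer = new NativeArray<byte>(size, Allocator.Temp);
                image.Convert(conversionParams, buffer);

                // Hand the raw pixels to the native face-tracking library here,
                // e.g. via a DllImport-ed entry point (hypothetical):
                // NativeFaceTracker_ProcessFrame(ptr, size, image.width, image.height);
            }
        }
    }
    ```

    Doing the conversion on every frame can be expensive; the asynchronous ConvertAsync path is worth considering if the tracking algorithm tolerates a frame of latency.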
     
  9. wwchanninglab

    wwchanninglab

    Joined:
    Oct 10, 2022
    Posts:
    4
    Thank you! I know how to do it now. This is really helpful!
     
    andyb-unity likes this.