[AR Foundation / General AR] Possible to have classifications on planes for responsive experiences?

Discussion in 'AR/VR (XR) Discussion' started by steven073, Aug 7, 2019.

  1. steven073

    Joined:
    Dec 1, 2015
    Posts:
    3
    Hello all,

    First, please excuse my French-English. ;)

    The AR domain has grown very quickly; new tools keep appearing, but some essential questions appear with them too.

    To summarize my request: I am looking for a simple and effective way to differentiate the planes detected by AR Foundation.

    I would like to be able to distinguish these planes by the following types, according to the surface detected during the AR experience: 'Ground', 'Table', 'Wall'.

    The end goal is to offer a responsive AR experience that adapts to the surface on which the 3D model is placed (smaller on a table, bigger on the floor).

    Looking at the official Apple ARKit documentation, this concept already exists natively, so it could also be surfaced in Unity (see the Apple source: https://developer.apple.com/documentation/arkit/arplaneanchor/classification).

    However, I do not see how to transpose this concept to Unity, because the AR Foundation framework does not expose it yet.
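
    For reference, here is a minimal sketch of how I imagine it would be consumed if AR Foundation exposed the ARKit value, for example as a classification property on ARPlane. The property name and the PlaneClassification values below are my assumption, not something the current package guarantees:

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.XR.ARFoundation;
        using UnityEngine.XR.ARSubsystems;

        // Sketch: react to newly detected planes and branch on their (assumed) classification.
        public class PlaneClassificationListener : MonoBehaviour
        {
            [SerializeField] ARPlaneManager planeManager;

            void OnEnable()  => planeManager.planesChanged += OnPlanesChanged;
            void OnDisable() => planeManager.planesChanged -= OnPlanesChanged;

            void OnPlanesChanged(ARPlanesChangedEventArgs args)
            {
                foreach (var plane in args.added)
                {
                    // 'classification' is hypothetical here: it would mirror
                    // ARKit's ARPlaneAnchor.classification on supported devices.
                    switch (plane.classification)
                    {
                        case PlaneClassification.Floor:
                            Debug.Log("Floor: spawn the large version of the model.");
                            break;
                        case PlaneClassification.Table:
                            Debug.Log("Table: spawn the small version of the model.");
                            break;
                        case PlaneClassification.Wall:
                            Debug.Log("Wall detected.");
                            break;
                        default:
                            Debug.Log($"Unclassified plane ({plane.alignment}).");
                            break;
                    }
                }
            }
        }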

    Should we set up an AI system that analyzes the camera image to recognize where the model is being placed?
    Or can we use the distance between a plane and the camera? (This case is problematic because we need at least two planes at different heights to compare, and the reference heights must stay coherent across all places. I sketch this idea below.)
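
    If no classification API is available, a rough height heuristic along the second idea could look like this. The alignment and transform APIs exist in AR Foundation, but the thresholds and the camera-based floor estimate are only guesses for illustration:

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.XR.ARFoundation;
        using UnityEngine.XR.ARSubsystems;

        // Rough heuristic: label a plane from its alignment and its world height only.
        public static class PlaneHeuristics
        {
            public enum SurfaceGuess { Unknown, Ground, Table, Wall }

            // assumedDeviceHeight: rough height of the phone above the floor, in metres (a guess).
            public static SurfaceGuess Guess(ARPlane plane, Transform arCamera, float assumedDeviceHeight = 1.4f)
            {
                if (plane.alignment == PlaneAlignment.Vertical)
                    return SurfaceGuess.Wall;

                if (plane.alignment != PlaneAlignment.HorizontalUp)
                    return SurfaceGuess.Unknown;

                // Estimate the floor level from the camera pose, then compare heights.
                float estimatedFloorY = arCamera.position.y - assumedDeviceHeight;
                float heightAboveFloor = plane.transform.position.y - estimatedFloorY;

                if (heightAboveFloor < 0.3f)   // near the estimated floor level
                    return SurfaceGuess.Ground;
                if (heightAboveFloor < 1.2f)   // typical table height range
                    return SurfaceGuess.Table;

                return SurfaceGuess.Unknown;
            }
        }

    This only works while the device height guess holds (hand-held at roughly chest level), which is exactly the coherence problem I mention above, so I would still prefer a real classification.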

    I hope someone will enlighten me.

    Thank you all in advance!