Hi, is there a way to prevent the user from detecting more than one plane? I would like the user to be able to scan their floor and for the plane to expand, but not detect a table surface if it accidentally gets in the way. Thanks, Adam
What I have been doing is this: there are a few examples and tutorials out there on raycasting to planes. Raycast to the plane(s) as they are being discovered, and have the user position a "shadow" of a game object to preview the placement. Once they are pointing at the floor plane you want, I put a fake transparent plane with a shadow shader (a prefab) at that position and rotation, then disable all plane detection. From then on, all physics objects and everything else interact only with that prefab floor plane (I just use a large plane, so it's as if it were infinite). The ARPlane is supposed to have an infinite-plane property, but I could not get it to react with anything, so I just put a "fake" plane there and I'm happy with that.
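A rough sketch of that workflow, assuming AR Foundation 4.x; the `floorPrefab` (a large plane with a transparent shadow shader) is your own asset, and the component names here are just for illustration:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class FloorPlacer : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycastManager;
    [SerializeField] ARPlaneManager planeManager;
    [SerializeField] GameObject floorPrefab; // large plane with a transparent shadow shader

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0) return;
        var touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Raycast against the detected planes only.
        if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
        {
            // Drop the "fake" floor prefab at the hit pose.
            var hitPose = hits[0].pose;
            Instantiate(floorPrefab, hitPose.position, hitPose.rotation);

            // Stop detecting new planes and hide the ones already found.
            planeManager.enabled = false;
            foreach (var plane in planeManager.trackables)
                plane.gameObject.SetActive(false);
        }
    }
}
```

From here on, physics objects collide with the prefab plane rather than any AR-detected surface.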
There is no way to distinguish a floor from a table in AR Foundation. I can suggest comparing all detected planes by vertical position and drawing only the lowest one (there is a high chance it will be the floor).
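A minimal sketch of that "show only the lowest horizontal plane" idea, assuming AR Foundation 4.x and an `ARPlaneManager` in the scene:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class LowestPlaneFilter : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager;

    void OnEnable()  => planeManager.planesChanged += OnPlanesChanged;
    void OnDisable() => planeManager.planesChanged -= OnPlanesChanged;

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        // Find the lowest upward-facing plane...
        ARPlane lowest = null;
        foreach (var plane in planeManager.trackables)
        {
            if (plane.alignment != PlaneAlignment.HorizontalUp) continue;
            if (lowest == null || plane.center.y < lowest.center.y)
                lowest = plane;
        }

        // ...and show only that one.
        foreach (var plane in planeManager.trackables)
            plane.gameObject.SetActive(plane == lowest);
    }
}
```

This filters out tables and other raised surfaces but, as noted above, it is a heuristic rather than a real classification.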
Actually, there is. See ARPlane.classification. However, it is currently only supported by ARKit. We have a sample showing how to use it here.
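For reference, a small sketch of how the classification can be queried (AR Foundation 4.x; on platforms without support the value is simply `PlaneClassification.None`, and the subsystem descriptor tells you whether classification is available at all):

```csharp
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public static class PlaneClassificationUtil
{
    // True if the platform can classify planes at all (ARKit on A12+).
    public static bool SupportsClassification(ARPlaneManager manager)
        => manager.descriptor != null && manager.descriptor.supportsClassification;

    // True if ARKit has labeled this plane as a floor.
    public static bool IsFloor(ARPlane plane)
        => plane.classification == PlaneClassification.Floor;
}
```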
Oh, I didn't think about that feature! But it's only supported by ARKit... ARCore is so far behind...
Yes, you're absolutely right, I forgot about ARKit plane classification. I should also say that plane classification is only supported on iOS devices with an A12 chip or newer.
Hi, just to follow up on this. It's not that I am looking to differentiate the floor from a table. If I am scanning the floor and there is a white rug in the middle, it might not be able to detect it and extend the plane, but when I get past the rug, it starts detecting the floor again and makes two planes. This is what I would like to prevent.
I have the same issue; I might get a few "chunks" of floor as different planes. I treat this as an initial "scanning" time frame for the user, and ultimately choose only one of the planes to base my floor off of. Then I put an invisible prefab plane in place using the chosen AR plane's transform and stop plane detection. The result is one plane parallel with the real floor.
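One way to express that "scan, pick one plane, freeze" flow as code, assuming AR Foundation 4.x; `invisibleFloorPrefab` is your own large stand-in plane, and how you choose the plane (raycast, lowest plane, etc.) is up to you:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class FloorLocker : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager;
    [SerializeField] GameObject invisibleFloorPrefab;

    // Call this (e.g. from a UI button) once scanning is done
    // and a floor plane has been chosen.
    public void LockFloor(ARPlane chosenPlane)
    {
        // Spawn the stand-in plane using the chosen AR plane's transform.
        Instantiate(invisibleFloorPrefab,
                    chosenPlane.transform.position,
                    chosenPlane.transform.rotation);

        // Stop plane detection and hide all detected planes.
        planeManager.enabled = false;
        foreach (var plane in planeManager.trackables)
            plane.gameObject.SetActive(false);
    }
}
```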
Thanks for the clarification. That sounds like an artifact of the plane detection algorithms used by the underlying AR framework (i.e., ARCore/ARKit). Unfortunately, there's not much we can do about that. ARCore does have a concept of one plane being merged into another to account for this case (see ARPlane.subsumedBy), but ARKit does not. What will likely happen is that it will see two separate planes, and then at some point remove those and create a single larger plane. I think @Treecrotch's suggestion is a good solution:
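A small sketch of handling that merging via `ARPlane.subsumedBy` (AR Foundation 4.x): when one plane is merged into another, `subsumedBy` points at the surviving plane, so you can follow the chain instead of holding on to a stale reference.

```csharp
using UnityEngine.XR.ARFoundation;

public static class PlaneMergeUtil
{
    // Follow the subsumedBy chain to the plane that ultimately survived
    // the merge. On platforms without subsumption this returns the
    // plane unchanged, since subsumedBy stays null.
    public static ARPlane Resolve(ARPlane plane)
    {
        while (plane.subsumedBy != null)
            plane = plane.subsumedBy;
        return plane;
    }
}
```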