I say this with the utmost respect to the team, because what MARS can achieve is what, as a filmmaker, I've dreamed about for the past 8 years. (I've been involved in creative uses of AR since 2004.)

As a paid product - with no ad hoc subscription option - it has glaring omissions and shortcomings (imho):

- No proper documentation from an easy-to-author point of view. Entire sections can't be found: Actions? Conditions?
- No proper examples: very rudimentary templates with no explanations (e.g. the navmesh sample).
- No video tutorials breaking down high-level concepts into mid-level detail (what does each of the MARS subcomponents do? The mouseover tooltips are vague/incomplete).

Tracking issues:

- Tracking is not as robust as native ARCore or ARFoundation, imo.
- MARS "forgets" the initial placement of objects in the real world. See this video: It's bad enough in the Simulation view; on a real device it's far worse. I'm testing on a Samsung Galaxy S8, a 4-year-old phone that supports the ARCore Depth API, so as to maintain some semblance of general Android compatibility.
- To make things worse, random red errors get thrown up in Unity. Being a non-coder, it's hard for me to debug these, and they shouldn't be so frequent in a paid product. It makes MARS feel like it's still in beta. (I'd pointed out the GUI errors being thrown up in a previous thread.) This video shows it kick in at around 45 seconds:
- To a non-technical author/user, these errors raise serious doubts about what they might be doing to actual devices (overheating, battery drain, an endless loop that affects object placement, etc.).

Given the above issues, I'll honestly say I'll invest another few days evaluating the benefits of paying for MARS versus doing the project with ARFoundation.

The lure of MARS is simple for me: cinematic AR films. Typical scenario: the user walks around a room and a runtime NavMesh gets built for "actors" to walk on.
Primitive furniture such as table and chair conditions gets recognized, and if the "actor" walks into such a trigger zone, it transitions to sitting down, picking up a book, etc. I suppose a custom coding project that compares camera height to plane height could do simple logic for tagging "floor", table, and chair, but in looking at MARS I was hoping to accomplish much more. Yet if the basics fail - objects popping in and out of view, or worse, walking a few steps with the phone, turning around, and finding MARS has "forgotten" where the object was placed - it's hard to justify attempting more moderately involved scenarios with MARS.

In comparison, Google's AR search results are more robust: place a tiger in the room and it almost instantly locks to the floor; walk around and occlusion is near instant; and there's no "forgetting" the location even if you walk into another room and back.

Recommendation: a budget needs to be put in place to create:

- Proper documentation (on an emergency basis).
- Video tutorials. (The webinars, as good as they were, were fraught with pacing issues and non-linear tidbits being touched on but not fully explained.)
- More templates, with video tutorials to go with them.

Hoping this is taken the right way, and is helpful fwiw. Kind Regards.
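P.S. For anyone curious, the "tag floor/table/chair by camera height versus plane height" idea I mention above could be sketched roughly like this. This is purely illustrative pseudologic, not MARS or ARFoundation API code, and every threshold value is a guess of mine, not something from any SDK:

```python
# Hypothetical sketch: classify a detected horizontal plane as floor /
# chair / table based on how far it sits below the device camera.
# All thresholds are illustrative assumptions, not SDK values.

def classify_plane(camera_height_m: float, plane_height_m: float) -> str:
    """Label a horizontal plane by its vertical drop below the camera.

    camera_height_m: world-space Y of the phone camera (e.g. ~1.5 m
                     for a phone held at chest height).
    plane_height_m:  world-space Y of the detected plane.
    """
    drop = camera_height_m - plane_height_m  # distance below the camera
    if drop >= 1.2:          # nearly the user's full height below the lens
        return "floor"
    if 0.8 <= drop < 1.2:    # roughly knee-to-hip height: chair seats
        return "chair"
    if 0.4 <= drop < 0.8:    # roughly waist height: tables, desks
        return "table"
    return "unknown"         # shelves, counters, raised surfaces, etc.

# Camera at 1.5 m; planes at various heights:
print(classify_plane(1.5, 0.05))   # floor  (drop = 1.45 m)
print(classify_plane(1.5, 0.45))   # chair  (drop = 1.05 m)
print(classify_plane(1.5, 0.75))   # table  (drop = 0.75 m)
```

An "actor" entering the bounds of a plane tagged "chair" could then trigger the sit-down transition I describe above.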