
Question Using MARS for well-defined AR experiences

Discussion in 'Unity MARS' started by Amyd80, Jun 2, 2020.

  1. Amyd80

    Amyd80

    Joined:
    Jun 2, 2020
    Posts:
    2
    We were kind of hoping MARS would be somewhat different from what it turned out to be - but perhaps I'm missing the point right now.

    As far as I can see, most of the recognition & tracking logic is outside MARS's scope, pushed down to AR Foundation or a lower level. So, if we wanted to create something like environment or scene tracking based on pre-scanned data (point clouds or meshed geometry), where the AR triggers & experiences are pre-positioned & pre-defined in advance, we would (right now) be out of luck, and would need to use a third-party framework, which probably doesn't quite plug into MARS (for now) and/or MARS would give us little to no benefit.

    Or am I missing something basic here?
     
    FutureSystems likes this.
  2. Jono_Unity

    Jono_Unity

    Unity Technologies

    Joined:
    Apr 5, 2016
    Posts:
    18
    Hey @Amyd80, there are a few things in MARS to help you do persistent / location-based AR like you're describing, and some more coming down the pipe.

    If you have a model / scan / other representation of your known space, you can configure that as a simulation environment, and then be working with that space in MARS in the Simulation view. Setting up a simulation environment is covered in this doc: https://docs.unity3d.com/Packages/com.unity.mars@1.0/manual/SimulationEnvironments.html
    To relocalize to your space (get your device lined up with the physical space), today the best approach is using image markers: if you're able to put an image in your space, or better yet use an existing poster or such, then you can configure a simulated version of that marker in your simulation as well. This page covers working with image marker proxies, and setting up simulated markers in your environment: https://docs.unity3d.com/Packages/com.unity.mars@1.0/manual/Markers.html

    Looking ahead, we will be adding support for persistent anchors, at which point you'll be able to relocalize to your known space without the need for image markers. We have the foundation of this support, but currently don't ship providers for this functionality. It's on our radar as a high-priority ask, and we're building support now.
    We'll also be shipping the MARS Companion Apps later this year, which are phone & HMD apps specifically built for capturing this kind of environment data to bring into the Editor and make this workflow more straightforward.

    Thank you for the feedback, please keep it coming :)
     
    herbrush and FutureSystems like this.
  3. Amyd80

    Amyd80

    Joined:
    Jun 2, 2020
    Posts:
    2
    Thanks, that does sound quite a bit more promising.

    Just two questions: when using scan data in the Simulation view right now, I assume it doesn't "automagically" work with third-party frameworks like Vuforia or Wikitude, right? So you can't actually test the external recognition of those frameworks within the MARS environment? I at least couldn't figure out a way to do that on a cursory try. I guess this is something they need to implement on their side of things, to plug into MARS?

    Secondly: can you talk a bit more about the persistent anchor functionality? That does sound like something that would be highly interesting for our current projects. Would this be as "simple" as converting an existing point cloud or mesh to a format that the MARS provider can work with (or alternatively using your companion apps to scan/record), or will it be more involved?
     
  4. Jono_Unity

    Jono_Unity

    Unity Technologies

    Joined:
    Apr 5, 2016
    Posts:
    18
    Vuforia and Wikitude don’t currently have provider integrations into MARS, so for example an image marker proxy set up in MARS won’t work automagically using those services at runtime. But, in the Editor, we use ‘simulated’ providers instead of the actual runtime provider anyway, so you can simulate generic image marker tracking in Editor, and then use Vuforia or Wikitude to do your runtime tracking (which would be the typical Vuforia or Wikitude object setup, not a MARS proxy).
    Writing providers for either of those frameworks would simplify this, because a MARS-style image marker proxy would then also work at runtime and not require a separate setup. In some cases, you or we can write these providers; in others, it does require some adjustment from the framework developer. This page gets into writing providers (under ‘Providers’): https://docs.unity3d.com/Packages/com.unity.mars@1.0/manual/SoftwareDevelopmentGuide.html#providers
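    To make the shape of this more concrete, here's a rough sketch of what a marker-tracking provider wrapping a third-party SDK could look like. The interface and member names below are purely illustrative, not the actual MARS provider API - see the Software Development Guide linked above for the real interfaces.

```csharp
using System;
using UnityEngine;

// Hypothetical stand-in for a MARS marker-tracking provider interface.
// The real interface names and members live in the MARS package; this
// sketch only shows the general adapter pattern a provider follows.
public interface IMarkerTrackingProvider
{
    event Action<string, Pose> MarkerFound;   // marker id + tracked pose
    event Action<string, Pose> MarkerUpdated;
    event Action<string> MarkerLost;
}

public class ThirdPartyMarkerProvider : IMarkerTrackingProvider
{
    public event Action<string, Pose> MarkerFound;
    public event Action<string, Pose> MarkerUpdated;
    public event Action<string> MarkerLost;

    // In a real provider you would subscribe to the third-party SDK's
    // tracking callbacks (e.g. a Vuforia observer event) and translate
    // them into the events that marker proxies consume.
    public void OnSdkMarkerTracked(string markerId, Vector3 position, Quaternion rotation)
    {
        MarkerUpdated?.Invoke(markerId, new Pose(position, rotation));
    }
}
```

    The point of the pattern: the framework-specific callbacks stay inside the provider, and everything above it only ever sees generic marker events.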
    Let us know if you’re interested to dig in and we can provide guidance; otherwise, good to know which providers you’d like to see next :)

    About persistent anchors, that functionality is platform-specific and requires capturing the space on the platform - you can’t convert an existing point cloud or mesh into a persistent anchor. So yep, that’s what that feature of our Companion Apps is about: with the app, you scan/record the anchors of your space, and can bring those into your project to relocalize against. Now that our initial version of the MARS Editor extension is out, we’re working on getting these Companion Apps to you as soon as possible (later this year).
     
    jmunozarUTech likes this.
  5. MOlanders

    MOlanders

    Joined:
    Oct 1, 2020
    Posts:
    1
    Hi Jono_Unity,

    You wrote this:
    If you have a model / scan / other representation of your known space, you can configure that as a simulation environment, and then be working with that space in MARS in the Simulation view. Setting up a simulation environment is covered in this doc: https://docs.unity3d.com/Packages/com.unity.mars@1.0/manual/SimulationEnvironments.html
    To relocalize to your space (get your device lined up with the physical space), today the best approach is using image markers: if you're able to put an image in your space, or better yet use an existing poster or such, then you can configure a simulated version of that marker in your simulation as well. This page covers working with image marker proxies, and setting up simulated markers in your environment: https://docs.unity3d.com/Packages/com.unity.mars@1.0/manual/Markers.html

    Can you please explain some more on how to do the relocalize with an image?
     
  6. Jono_Unity

    Jono_Unity

    Unity Technologies

    Joined:
    Apr 5, 2016
    Posts:
    18
    Hey MOlanders, sure thing, here's how we do it in our location-based demo. We have a photogrammetry scan we made of San Francisco City Hall, which we set up as a simulation environment following the process in that doc link. Then we've added to that environment a synthetic image marker (most easily done via Window -> MARS -> MARS Panel, and then under the Create / Simulated headers, 'Synthetic Image Marker', and then configure it to the desired image in the inspector, same as an image marker proxy). We have the marker laid out in a central location in the sim:
    upload_2020-12-9_12-35-7.png
    The implication here is that this marker is really at that exact spot in the real location - this is why I suggested using an existing poster or other such 'permanent' marker.

    Now that we have the simulated setup, the other side of the coin is authoring the proxy that you'll actually deploy in your scene/app. Here's the scene we use in conjunction with the sim environment above, with a bunch of content under this marker proxy, so we can position things absolutely around it:
    upload_2020-12-9_12-41-15.png

    That gets you most of the way there -- you should now be able to lay content out relative to the space by positioning it relative to the marker.

    What's missing now is any occlusion from the real environment. This may or may not be important depending on the nature and scale of your app, but we solved it via that child you see in the hierarchy above, 'MemorialCourt_Reference' -- this is the actual photogrammetry scan again, positioned so that it will line up with the real location, meaning that we can then either 1) render a stylized version of the real space over itself, or 2) apply an occlusion shader (write depth only, not color) so virtual objects don't render through real buildings. In a smaller space we'd recommend using a Plane Visualizer to do this occlusion, but plane finding isn't viable in large outdoor scales.
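    For reference, the depth-only occlusion trick described above can be done with a very small shader. This is a minimal built-in render pipeline sketch (the shader name is arbitrary): it writes depth but masks out all color, so the camera feed stays visible while virtual objects behind the occluder mesh are hidden.

```shaderlab
Shader "Custom/DepthOnlyOccluder"
{
    SubShader
    {
        // Render before regular geometry so virtual objects behind the
        // occluder fail the depth test against it.
        Tags { "Queue" = "Geometry-1" }
        Pass
        {
            ZWrite On     // write depth...
            ColorMask 0   // ...but no color, so the camera feed shows through
        }
    }
}
```

    Assign this to a material on the reference scan mesh; anything virtual positioned behind the real-world geometry will then be clipped by it.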

    The Rules (MARS 1.1) webinar a bit ago covers this use case (among others), in the context of the Rules feature - I think I've hit the main points here, but if you prefer video, give it a look :) https://create.unity3d.com/mars-rules-webinar

    Let me know if this makes sense & works for you -- thanks!
     
  7. dariuspowell

    dariuspowell

    Joined:
    Jun 11, 2015
    Posts:
    28
    Hi.

    I'm glad I found this post as it may offer a solution to a problem I have creating an AR trail during lockdown.

    If a 3D environment is loaded in using the image marker method, how consistent will the line-up be once the user moves away from the image?

    Is it possible to bring ARWorldMap data into MARS? Or if not, if I triggered an ARWorldMap when the image marker is recognised, would that help stabilise and anchor the 3D environment?

    In a nutshell, I have an accurate scan of a church interior that I want to load in place when the user scans an image marker. Then, as they freely move around the physical space, the virtual space remains aligned.

    Any advice or confirmation would be appreciated!

    Thanks
    Darius
     
  8. jmunozarUTech

    jmunozarUTech

    Unity Technologies

    Joined:
    Jun 1, 2020
    Posts:
    297
    hello @dariuspowell

    Image markers will work well while the app recognizes them. The thing is, if you get far enough away, it will reach a point where the app stops recognizing them and things might not work.

    To compensate for this, you could use several image markers placed across the environment; if you know beforehand the distances between the markers, you can still calculate the position and alignment of the environment.
    The important part is to always have at least one image marker recognized.
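    As a sketch of the math involved (names like `contentRoot` and `markerPoseInMap` are illustrative, not MARS API): given the authored pose of a marker in your map and the pose the device just tracked for that marker, you can re-derive the map root's transform so the two coincide.

```csharp
using UnityEngine;

// Illustrative sketch: re-align a content root so the authored marker
// position in the map lands exactly on the pose tracked on device.
// Assumes the content root has unit scale.
public static class MarkerAlignment
{
    public static void Align(Transform contentRoot,
                             Pose markerPoseInMap,   // authored pose, in content-root local space
                             Pose trackedMarkerPose) // tracked pose, in world space
    {
        // Rotation that carries the authored marker orientation onto the tracked one
        var rotation = trackedMarkerPose.rotation * Quaternion.Inverse(markerPoseInMap.rotation);
        contentRoot.rotation = rotation;

        // Position the root so the authored marker point lands on the tracked point
        contentRoot.position = trackedMarkerPose.position - rotation * markerPoseInMap.position;
    }
}
```

    Calling this whenever any of the markers is recognized keeps the whole environment aligned from whichever marker is currently in view.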

    With regard to ARWorldMaps: MARS will work with AR Foundation; you might want to check the AR Foundation samples (https://github.com/Unity-Technologies/arfoundation-samples), specifically ARWorldMapController.cs, which performs the logic in that example.

    Do bear in mind that ARWorldMap is an ARKit-specific feature.
     
  9. dariuspowell

    dariuspowell

    Joined:
    Jun 11, 2015
    Posts:
    28
    thanks @jmunozarUTech for the reply.

    That's what I thought would be the case but I've also seen what @DanMillerU3D prototyped a few years back, https://twitter.com/danmillerdev/status/1100472988778917888?lang=en

    This seems to show that you could use an image target to launch the AR content and once tracking of image is lost, the world tracking would kick in?

    Also in @Jono_Unity 's example above it shows an image marker as the trigger. The image wouldn't be in view while the user moves around the real environment.

    I feel I'm close to coming up with a solution. Any support would be appreciated.

    thanks
     
  10. jmunozarUTech

    jmunozarUTech

    Unity Technologies

    Joined:
    Jun 1, 2020
    Posts:
    297
    Ahh OK, now I understand what you meant.

    Yes, you could definitely do it with that approach: get the initial anchor with an image target, place the augmented content based on that anchor, and start your experience.

    The thing is that you might get some drift depending on the device you use and on how long you run the experience. This happens because the SLAM (positioning) algorithms used by ARKit / ARCore are not closed-loop (for performance reasons).

    Hence the reasoning in my first post when I mentioned that several images would "re-anchor" your content in case some drift happens.

    That being said, try it out. I don't see why it would fail, but do keep in mind that there might be a little bit of drift :).

    For more info about closing the loop, check this out: https://blog.rabimba.com/2018/10/arcore-and-arkit-SLAM.html - a little bit old, but worth the read if you want to understand what's happening under the hood
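    One way to apply that "re-anchor" idea without visible popping is to only snap the content when the measured drift exceeds a tolerance. A minimal illustrative sketch (not MARS API; the threshold value is an assumption you'd tune per project):

```csharp
using UnityEngine;

// Illustrative sketch: when a marker is re-recognized, compare where the
// content says the marker should be against where tracking actually found
// it, and only shift the content if the drift is noticeable.
public class DriftCorrector : MonoBehaviour
{
    const float k_PositionTolerance = 0.05f; // metres; assumed value, tune as needed

    public void OnMarkerReacquired(Transform contentRoot,
                                   Vector3 expectedMarkerWorldPos,
                                   Vector3 trackedMarkerWorldPos)
    {
        var drift = trackedMarkerWorldPos - expectedMarkerWorldPos;
        if (drift.magnitude > k_PositionTolerance)
            contentRoot.position += drift; // re-anchor: shift content by the measured drift
    }
}
```

    Small corrections below the tolerance are ignored, so content doesn't visibly jitter every time a marker comes back into view.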
     
  11. merrythieves

    merrythieves

    Joined:
    Apr 28, 2021
    Posts:
    14
    Sorry for the dumb question: but what exactly is this demo showing? How is it supposed to be used? More importantly: does it show us how to do something that can actually be done?

    The theory seems to be that a real person in this real version of this environment could walk up and scan a marker that is in the real world. But in the example, the marker is the size of a swimming pool for some reason. Who is supposed to be scanning this code and where are they supposed to stand? Or does this actually only work in the Unity editor player under perfect conditions?

    Assuming a person were to get the code scanned somehow, we'd hope they can walk some yards away to a light pole and see a virtual light correctly aligned to it. But it sounds like from what I'm reading here that this is not possible and you'll need to actually align everything to nearby image anchors. So MARS is providing a slightly better interface for working with all of this, but it doesn't seem like it's actually doing anything more than image anchors under the hood?

    It would be immensely helpful to have a demo of this system working for this use case. This would inform us a bit better than the demo which seems to be more theoretical than functional.
     
    Last edited: Jul 20, 2021
  12. Jono_Unity

    Jono_Unity

    Unity Technologies

    Joined:
    Apr 5, 2016
    Posts:
    18
    Hey samgarfield,
    You're right that the marker settings here are not what you'd really want when deploying this experience - yep, for starters you would want a reasonable marker size; it's blown up here mostly for readability in the demo.

    You've hit on something that we've been planning to improve about this demo: for a space as large as the one shown here, really you wouldn't want to rely on a single marker, but use a few markers spaced around the location, and use each marker you find to refine the pose of the matched scene, rather than doing a one-to-one matching like how it's set up right now. Like you pointed out, using a single marker in a large space would inevitably lead to large drift in the content as you move farther away from that marker.

    It sounds like you're working on things which would really benefit from the improvements we've been working towards for location-based experiences -- if you're open to it, we'd be happy to jump on a call and talk through your project and how we can nail supporting what you need in our next updates :)
     
    jmunozarUTech likes this.
  13. CreepyInpu

    CreepyInpu

    Joined:
    Oct 9, 2014
    Posts:
    23
    Hi !

    Is it possible to replicate the functionality of Vuforia's Area Targets? With Area Targets, we can simply import a Matterport scan of a room into Unity and place 3D objects in it. Once the user is inside the room, the tracking works really well and the 3D objects are positioned according to where they were placed in Unity, with no need to scan a marker or anything. The only downside is... the price. It's something like $24k per year.
     
  14. Jono_Unity

    Jono_Unity

    Unity Technologies

    Joined:
    Apr 5, 2016
    Posts:
    18
    Hey CreepyInpu, alas, we don't support Vuforia Area Targets out of the box in MARS, though if you're particularly motivated, it would be possible to write a provider (http://docs.unity3d.com/Packages/com.unity.mars@1.3/manual/SoftwareDevelopmentGuide.html#providers) for it.
    In a future release (can't give a specific timeframe at the moment), we intend to provide a general purpose persistent anchor workflow which would give you a similar authoring experience to what you're describing.
    For our understanding about what you & others need:
    - Would an intermediate solution which is not cross-platform be helpful to you? In other words, in your project, are you targeting one platform or multiple?
    - Are you currently using Vuforia Model Targets or another solution (Apple WorldMaps, Azure Spatial Anchors, etc)?

    Thanks!
     
    jmunozarUTech likes this.
  15. EVASNGULAR

    EVASNGULAR

    Joined:
    Mar 27, 2020
    Posts:
    1

    Hi,
    Thank you very much for the information, a question: Have the persistent anchors been implemented in the current version of MARS?
    upload_2021-10-18_11-50-43.png
     
  16. jmunozarUTech

    jmunozarUTech

    Unity Technologies

    Joined:
    Jun 1, 2020
    Posts:
    297
    hello @EVASNGULAR, they have not. The best option would be to implement your own provider for it.
     
  17. diodedreams

    diodedreams

    Joined:
    Dec 26, 2018
    Posts:
    10
    Hi @jmunozarUTech, speaking of updates, is MARS still in active development? In other words, does Unity have some kind of public Roadmap or vision, with planned features to be released in the coming months?

    The reason I ask is that I just signed up for a year of MARS, and I'm pretty underwhelmed by the lack of information and tutorials/videos available online. It seems that only a few videos (mostly overviews) were released in Summer 2020, and there hasn't been major news since, beyond version 1.3 coming out and a Companion app that's still in beta.

    Will there ever be a way to use MARS with Microsoft Mixed Reality Toolkit (MRTK) or Azure Spatial Anchors, or have a world locking coordinate system, like MRTK has now?
     
    jmunozarUTech likes this.
  18. Jono_Unity

    Jono_Unity

    Unity Technologies

    Joined:
    Apr 5, 2016
    Posts:
    18
    Hey @diodedreams, yes! MARS is still in active development, with 1.4 closing out QA now ahead of its release very soon. That update will add major new meshing functionality, along with a ton of smaller improvements and fixes. The general Unity XR roadmap is here - https://unity.com/roadmap/unity-platform/arvr - and does have a MARS tab showing 1.4/meshing, though I'm seeing now that we haven't publicly stated next major features after 1.4. I can't say too much in this venue, but to your question and the topic of this thread, yes, persistent / spatial anchors are a huge priority for us going forward!
     
    diodedreams likes this.
  19. stereocorp3d

    stereocorp3d

    Joined:
    Nov 15, 2021
    Posts:
    2
    Hello guys, I've been excited about where MARS could/would be heading for quite some time, and I've been following your progress closely.
    The fact that Azure Spatial Anchors and Vuforia Area Targets have both offered solutions for large-scale user positioning while delivering tenfold tracking improvements makes me wonder if MARS might just be missing the boat.
    With Lighthouse SRDK more recently offering yet another alternative to these approaches, one might wonder what the hold-up is on your end and how many competing solutions you're willing to see take the lead...
    @jono I think you're right to have made persistent/spatial anchors in MARS a huge priority, but unless it comes out in the next couple of months I'm afraid all that hard work might have been for nothing. I would truly be sorry to see that happen, as I think MARS had some great features in the beginning.
     
  20. diodedreams

    diodedreams

    Joined:
    Dec 26, 2018
    Posts:
    10
    @stereocorp3d I completely agree with your sentiments. I have buyer's remorse for committing myself to a year of MARS, given that Microsoft's Azure Spatial Anchors and Mixed Reality Toolkit (MRTK), as well as Vuforia's Area Targets, are all robust solutions that are not addressed by MARS.

    @Jono_Unity On November 1st, you said that "persistent / spatial anchors are a huge priority for us going forward". Any update on this? The Roadmap hasn't seen any love since then: https://unity.com/roadmap/unity-platform/arvr
     
    Last edited: Dec 28, 2021
  21. ekeegan

    ekeegan

    Joined:
    Jul 4, 2012
    Posts:
    7
    Apologies if this solution exists somewhere and I've missed it somehow. Is there no workflow that makes use of a USB-connected device hosted from a local machine, instead of deploying a standalone app or acquiring expensive hardware (HoloLens & Leap)? I see there is face tracking using the camera, but can't this same feed be used for something like the platformer example with proxy surfaces? Am I missing a Windows integration? Appreciate any guidance here!
     
  22. CiaranWills

    CiaranWills

    Unity Technologies

    Joined:
    Apr 24, 2020
    Posts:
    199
    @ekeegan that sounds like remoting, which is currently on our roadmap.
     
  23. Bero

    Bero

    Joined:
    Oct 16, 2012
    Posts:
    5
    Hi,
    I'm using MARS. I want to use it to build indoor navigation using image markers. I have a building blueprint, and I place image markers from a real wall at exact positions on the map. Let's say that ImageMarker1 is at position 9,0,3 meters and ImageMarker2 is at position 6,0,3 meters (a difference of 3 meters). Everything works as expected in MARS (image markers are recognized and 3D models are overlapped). However, I would like that when I'm in front of an ImageMarker, the camera position is relocated to the exact position in front of it, so that my camera location matches my position on the map and in the real world. The problem is that the camera position is in some other (relative) space/coordinate system, and it is not possible to use it as an absolute position in the building (or I don't know how to do it). Can I somehow force relocation of the camera to an absolute point on the map? When I start the app with MARS, how does it orient the axes at start? Towards north? Or towards the initial marker image scan? Thank you
     
  24. jmunozarUTech

    jmunozarUTech

    Unity Technologies

    Joined:
    Jun 1, 2020
    Posts:
    297
    Hey there,

    Unfortunately it's not possible to forcibly position the MARS camera, since it gets auto-positioned against the world depending on the platform you are using (Android/iOS). That being said, with some math, instead of repositioning the camera you could reposition all the objects to give the feeling of the camera being repositioned (though the camera feed will remain the same).
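    As a sketch of that "reposition the objects instead of the camera" idea (all names here are illustrative, not MARS API): given a known position and yaw the camera should have on your map - for example, derived from the image marker just recognized - you can move the content root so the real camera ends up there.

```csharp
using UnityEngine;

// Illustrative sketch: leave the tracking-driven AR camera alone and move
// the content root so the user appears at a known map position (e.g. right
// in front of the image marker they just scanned).
public static class MapRelocation
{
    public static void PlaceUserAt(Transform contentRoot, Transform arCamera,
                                   Vector3 desiredCameraPosInMap,  // where the user should be on the map
                                   float desiredCameraYawInMap)    // which way they should face, degrees
    {
        // Yaw-only alignment keeps the map upright (gravity-aligned).
        float yawOffset = arCamera.eulerAngles.y - desiredCameraYawInMap;
        contentRoot.rotation = Quaternion.Euler(0f, yawOffset, 0f);

        // Move the map so the desired map point lands under the real camera.
        contentRoot.position = arCamera.position - contentRoot.rotation * desiredCameraPosInMap;
    }
}
```

    After this call, reading the camera's position in the content root's local space gives you the user's absolute position on the blueprint, which is usually what an indoor navigation UI needs.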