Polarith AI (Free/Pro) | Movement, Pathfinding, Steering

Discussion in 'Assets and Asset Store' started by Polarith, Apr 18, 2017.

  1. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    That's great! We can provide you support via email, Skype, Discord, or directly here in the forums. Just shoot us a message and we'll be there.


    Martin from Polarith
     
  2. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    621
    I feel a little bit like I'm missing out by not using more than 2 objectives (attraction and repulsion/interest and danger.) With a standard 2D steering context, what would be a common example of a third (or more) objective?

    BTW: I've found this article from Game AI Pro 2 to be a really good intro to context steering. It does a really good job of pinpointing the issues of classical steering and how context steering solves the problem differently/better while keeping the advantages of such a simple approach. Might be nice to add that link to your documentation? Not to say your documentation is bad, but I feel it just describes the why and how really well.
     
    Polarith likes this.
  3. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Indeed, there are several scenarios where especially a third objective would come in handy. In general, whenever you want to combine the movement direction with a separate orientation mechanism, a third objective is the thing for you. This is especially important for moving swarms of agents.

    Imagine you want to create a boid: it is always a balancing act between moving somewhere as an individual (interest/danger) and adapting to the alignment of surrounding characters (the third objective, perhaps named alignment or swarming). With the third objective constraint, you have a great instrument for weighting between individual and swarm movement. In 1.6, we will roll out the boid scene that you can see in the latest feature update trailer. In there, we also used a third objective constraint to easily achieve such effects. This way, we need only a minimal number of behaviours and thus save a lot of performance.


    Back in my university days, I wrote a paper, namely my master's thesis, about this topic, showing the issues that even Fray's context steering approach still has and how to overcome them as well (which we actually implemented in Polarith AI, of course).

    Thank you very much for the hint. I always thought the mathematical details were too overwhelming/boring for most people (while being the most interesting thing for me as a mathematician). Now, I think it would be reasonable to link to the article and, on top of that, to make a decent video on YouTube explaining the differences in detail.

    As one of our early adopters, I have a question for you: which feature do you want the most in the next patches? It's always nice to read from you.


    Martin from Polarith
     
  4. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    621
    I've previously handled orientation with a custom script but now I'm wondering why I hadn't tried using a third objective for orientation-specific scoring. Looking forward to the boids btw, I'd also be curious to see how Polarith AIM performs vs a dedicated flocking/boid algorithm implementation.

    I'm actually quite happy with the asset as it is. The tutorials are also pretty good. However, I'd love to see some more 'exotic' (compared to classic steering & boid behaviour) applications. That would be great inspiration for getting the most out of this package. Working in a small team with lots of external code/assets, there's often not really enough time to get really familiar with many frameworks, especially if they're a bit more complex. So I feel like there may be a lot of other cool things I could pull off with this package with reasonably low effort by using the components more creatively.

    With that in mind: I'm using custom sensors and a storage system for observed data. This is mostly necessary as other AI systems also access the data, but it additionally enables e. g. ray casts for line of sight checks. This means that each agent gets its own AIM environment for its perceived dynamic objects (e. g. allies and enemies.) Is that really the best way to handle this scenario?
     
    Polarith likes this.
  5. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Thank you, glad to hear that. :oops:


    We feel the same, but things will get better and better over time. Version 1.6 will introduce TinyWood and some other cool example scenes. More stuff will then follow with further patches. I'm looking forward to more "exotic" examples as well. :)


    Well, you can share environments across multiple agents. Maybe you can explain your scenario in a little more detail so that I'm able to help you out with this issue.

    Oh, and sorry for the delayed answer. We've been really busy this week.


    Martin from Polarith
     
  6. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    621
    Essentially, my question boils down to: What is the most elegant way to handle per-agent visibility detection? I know, in a custom steering behaviour I could do angle checks for some more fine-grained behaviour, but this gets more difficult for e. g. visibility raycasts, which would ideally tick at a lower frequency than steering. So the options I can think of are one environment per actor (not sure how bad of an idea that is), setting GOs per behaviour (scales really badly I think, especially with a larger number of behaviours, because the same data would be repackaged more often than necessary, right?) and finally a custom perception pipeline. Or am I missing something?
     
  7. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hey, @christoph_r!

    You've got most things absolutely correct.
    1. One environment per agent is most certainly not what you want because you would also need a steering perceiver for each individual and the whole setup would kill your performance.
    2. Using GOs directly, you said it best, would scale badly for large numbers of objects.
    3. You can use a cone-shaped sensor (built by using AIMPlanarShaper) to model an angle-based view per agent. Of course, this would not help you either if you do these checks to improve performance rather than for game logic.
    4. (Over-)Writing custom perception components is the way to go in terms of performance.

    To adapt the perception pipeline to your needs, you have to derive from both AIMPerceiver<SteeringPercept> and AIMFilter<SteeringPercept>. Even though everything is already documented, I think it will help a lot if I explain some basic concepts here.

    First, let's have a look at your custom AIMPerceiver<SteeringPercept>. Let's say that you derive a class named OcclusionPerceiver. Then, you have one method you need to override and one that you might use to initialize things (e.g. special scene structures to accelerate the query, like search trees or raycast mechanisms). In the PerceiveEnvironment() method, you have to iterate all percepts of the given environment for both the layer GOs and the GO list. Before you get too puzzled over that, let me provide you with a trivial example snippet.

    Content of a custom PerceiveEnvironment() method:
    Code (CSharp):
    Collections.ResizeList(
        percepts,
        environment.LayerGameObjects.Count +
        environment.GameObjects.Count);

    // Receive the percepts of the objects gathered via layers...
    for (int i = 0; i < environment.LayerGameObjects.Count; i++)
    {
        percepts[i].Receive();
        percepts[i].Received = true;
    }

    // ...and of the objects in the explicit GameObjects list.
    for (int i = 0; i < environment.GameObjects.Count; i++)
    {
        percepts[environment.LayerGameObjects.Count + i].Receive();
        percepts[environment.LayerGameObjects.Count + i].Received = true;
    }
    The actually important part is that you need to define your own method, e.g. QueryPercepts(Vector3 position, float angle, IList<string> environments, IList<SteeringPercept> percepts) or something similar. This method is later called from the OcclusionFilter you also need to implement. The position is then the position of the current agent, the angle could be the maximum angle allowed for a percept to be visible (of course, you can also do raycasts in this method instead), the environment labels are used to filter the actual AIMEnvironments (specified per behaviour), and the percepts list is used to pass the valid percepts to the filter (per agent).

    The implementation of QueryPercepts(...) could look something like this:
    Code (CSharp):
    SteeringPercept percept;
    int offset = 0;
    IList<SteeringPercept> envPercepts;

    // For each environment label
    for (int i = 0; i < environments.Count; i++)
    {
        // Get the percepts of this environment
        if (!Percepts.TryGetValue(environments[i], out envPercepts))
            continue;

        for (int j = 0; j < envPercepts.Count; j++)
        {
            percept = envPercepts[j];
            if (PassingCondition(percept)) // Your angle or raycast test
            {
                percepts[offset + j] = percept;

                // Receive the data only once, no matter how many agents see it
                if (percept.Received)
                    continue;
                percept.Receive();
                percept.Received = true;
            }
            else
            {
                percepts[offset + j] = null; // Not visible to this agent
            }
        }
        offset += envPercepts.Count;
    }
    Ok, now the last part: a corresponding derived AIMFilter<SteeringPercept> that actually calls this method. It needs at least a public field for your OcclusionPerceiver and, of course, the parameters you want to have per agent, e.g., an angle, a range for the raycast, and maybe a tick rate. To do so, you just have to override the GetPercepts(IList<string> environments, IList<SteeringPercept> percepts) method, in which you need to resize the percepts list according to the perceiver.

    Here is an example snippet for resizing the percept list:

    Code (CSharp):
    int totalCount = 0;
    IList<SteeringPercept> envPercepts;
    for (int i = 0; i < environments.Count; i++)
    {
        if (!SteeringPerceiver.Percepts.TryGetValue(environments[i], out envPercepts))
            continue;
        totalCount += envPercepts.Count;
    }
    Collections.ResizeListDefault(percepts, totalCount);
    Finally, call your special QueryPercepts method, and it should work. Of course, you can improve many things here. For example, the percepts in the perceiver could be received only if they are relevant to at least one agent. For this, the Received property must be false, and you can then lazy-receive the percepts in the query method. (You may at least pre-receive things like the position since you will need it for your check.) This way you avoid copying a lot of unnecessary data to the backend. Furthermore, you could implement/override OnDrawGizmos() in the custom filter to visualize the rays, angles and ranges of your agents.
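
    Putting the pieces together, a skeleton of such a filter could look like the following sketch. OcclusionPerceiver, QueryPercepts and the Angle field are just the example names from this post, and the using directive for the Collections helper is an assumption; treat this as an outline rather than the definitive implementation.

    Code (CSharp):
    using System.Collections.Generic;
    using Polarith.AI.Move;
    using Polarith.Utils; // assumed home of the Collections helper used above

    // Sketch only: wires the resize snippet and the custom query together.
    public class OcclusionFilter : AIMFilter<SteeringPercept>
    {
        public OcclusionPerceiver Perceiver; // the custom perceiver from above
        public float Angle = 90f;            // per-agent view angle

        protected override void GetPercepts(
            IList<string> environments, IList<SteeringPercept> percepts)
        {
            // Resize 'percepts' to the total percept count over all relevant
            // environments, exactly as in the snippet above.
            int totalCount = 0;
            IList<SteeringPercept> envPercepts;
            for (int i = 0; i < environments.Count; i++)
            {
                if (!Perceiver.Percepts.TryGetValue(environments[i], out envPercepts))
                    continue;
                totalCount += envPercepts.Count;
            }
            Collections.ResizeListDefault(percepts, totalCount);

            // Let the perceiver write the percepts visible to this agent.
            Perceiver.QueryPercepts(transform.position, Angle, environments, percepts);
        }
    }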


    Let me know if this information helps you. All in all, this is an interesting topic. Last patch, we already introduced spatial search trees to optimize the perception/filter components. One day, we might make your desired raycasts and angles work right out of the box as well.


    Martin from Polarith
     
    Last edited: Oct 28, 2017
  8. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    621
    Wow, that's quite the detailed reply, especially for a Saturday. Talk about customer service.

    To get this straight: My problem is mostly related to QueryPercepts() (the documentation only mentions a protected GetPercepts() method?). My solution would be to assign globally unique IDs to every agent. This ID is also stored in the values list of that agent's SteeringTag. My custom perception system (not related to any Polarith code) then essentially keeps a list of IDs that are currently of interest to the agent. Finally, in the SteeringFilter, my observing agent simply checks every percept for its ID in the steering tag to see if it is of interest or not. Does this sound right?
     
    Polarith likes this.
  9. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    QueryPercepts() would be a 100% custom method you would newly define. Then, you would call it in GetPercepts() within your own filter. Since all this would be custom, there are no docs about it, of course. :)

    What about my suggestion to use a conic sensor for your scenario? That would be the most elegant out-of-the-box solution from an algorithmic point of view.

    Nevertheless, your suggestion would work as well. Such an application is exactly the reason why we've created the custom values in the AIM Steering Tag.
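
    For illustration, the per-percept check inside such a custom filter could be a fragment like this. Note that this is only a sketch: the Custom accessor for the Steering Tag's custom values on SteeringPercept is an assumption here, and relevantIds would be filled by your external perception system.

    Code (CSharp):
    // Set of agent IDs currently of interest, maintained by your own
    // perception system (sketch; names are placeholders).
    private readonly HashSet<float> relevantIds = new HashSet<float>();

    private bool PassingCondition(SteeringPercept percept)
    {
        // Relevant if any of the percept's custom tag values matches an ID
        // this agent currently cares about ('Custom' is assumed here).
        for (int i = 0; i < percept.Custom.Count; i++)
            if (relevantIds.Contains(percept.Custom[i]))
                return true;
        return false;
    }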


    Martin from Polarith
     
  10. giraffe1

    giraffe1

    Joined:
    Nov 1, 2014
    Posts:
    302
    Polarith likes this.
  11. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    621
    Whoops. Sorry, I must have skipped that part while typing up my reply. Okay, so I would query the percepts after resizing the list and set all irrelevant percept references to null?

    Right, well, I'm a bit hesitant to base the sensor system for my entire AI setup on Polarith for reasons of flexibility.

    Good to hear!

    By the way, I'm currently writing another controller to move an agent based on Polarith decisions - for some reason, the DecidedMagnitude of the context is always 1, despite a varying unconstrained objective (interest) value; there is only one constrained minimizing (danger) value. Is that expected behaviour?
     
  12. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hello and welcome @giraffe1.

    Well, this asset can be used without any coding at all. In this case, you have to stick with our parameters and pre-defined AI behaviours. Without coding, using our asset is a matter of finding the right combination of behaviours and correctly tweaking their parameters. After a learning curve, once you've gathered some experience with context steering, things should begin to work more and more intuitively.

    When you've already got a comprehensive (AI) infrastructure (like, e.g., @christoph_r has), then you would maybe need to have a closer look at our API for adapting the plugin to your needs. Of course, we'll help you with this; you would not be alone.

    Concerning your link: I think that Polarith AI is capable of what you want to do. A great plus is that we've already integrated Unity's pathfinding, so your characters obtain waypoints from Unity, but how these waypoints are followed depends on the steering, which is done by our plugin. The steering approach continuously makes tactical movement decisions each update, depending on the parameterization. So when the steering decides not to go directly to a waypoint because of "reasons", a new path which best matches the decided movement can then be computed asynchronously on-the-fly.

    I'm here when you have further questions. You're welcome.


    Martin from Polarith
     
    Last edited: Nov 2, 2017
  13. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Absolutely correct. The important point is that you only set the percept of your agent (in the filter) to null and not the original one coming from the perception pipeline.


    I can understand this, and as long as you use the sensor for sampling the environment so that the solver is able to process its decisions, everything is fine. Regardless of what else happens in your surrounding logic, using a cone-shaped sensor will not sacrifice any flexibility, but will lead to better performance (fewer receptors etc.) and to much more stable decision-making which works in harmony with your other code. Your surrounding AI and game logic can still do your desired angle and raycast checks as you've already described.


    Oh, my... So I had a closer look at the corresponding code and the documentation as well. It seems that we've totally botched the description of the property DecidedMagnitude, which even tricked ourselves in some code examples. Well, the property does the correct thing: it actually returns the magnitude (weight) of the receptor which won the decision-making process. This is, of course, always 1 unless it was changed by the AIMPlanarShaper. What you're actually looking for is the DecidedValues list containing all the objective values corresponding to the receptor which won the decision-making. When interest is your first objective, then you'll need DecidedValues[0]. Sorry for the confusion; the docs will be corrected with the next patch, too. Thank you so much for pointing this out. :oops:
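
    To make the difference concrete, a minimal controller fragment might read the decided objective value like this (a sketch assuming interest is objective 0 and 'speed' is a hypothetical field):

    Code (CSharp):
    // Read the decided interest instead of DecidedMagnitude, which stays 1
    // unless changed by the AIMPlanarShaper.
    float interest = Context.DecidedValues.Count > 0
        ? Context.DecidedValues[0]
        : 0f;

    // Scale the movement by the decided objective value.
    transform.position +=
        speed * interest * Context.DecidedDirection * Time.deltaTime;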


    Martin from Polarith
     
    Last edited: Nov 2, 2017
  14. MrJBRPG

    MrJBRPG

    Joined:
    Nov 22, 2015
    Posts:
    40
    Man... I am excited to try out the latest update when it's released. It has been so long since I last touched the AI that I'll have to brush up my skills again.

    I wonder what the launch window for the update is. Fall 2017 / Winter 2018?
     
    Polarith likes this.
  15. N00MKRAD

    N00MKRAD

    Joined:
    Dec 31, 2013
    Posts:
    210
    Is this suitable for an infinite world with moving objects?
     
  16. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    621
    That should be fine. You just need to provide references to the moving objects to the Polarith environment. If you don't use a nav mesh, you'd probably also want to provide references to static obstacles and set them up accordingly (assuming you generate your infinite world procedurally). However, now that Unity supports runtime generation of nav meshes, you should also be able to create these after generation and feed them into Polarith.
     
    Polarith likes this.
  17. N00MKRAD

    N00MKRAD

    Joined:
    Dec 31, 2013
    Posts:
    210
    Well, my world gets constantly bigger as the player moves, so I'm not sure how real-time navmesh generation would perform.

    But yeah, references wouldn't be a big problem; I currently use OverlapBox to find new obstacles and store them in an array that gets cleared before every scan (otherwise it would get huge after a while).
     
  18. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Greetings from the darkest depths of the Polarith programming basement.

    Well, the long-announced package/example overhaul is almost ready. There is only one big merge request left in the pipeline, improving Seek/Flee NavMesh so that it works with additional raycasts. We've already uploaded new and cool tutorials but are holding back their release on YouTube until the new package is live. :)

    The following patch, which will come with inbuilt 2D and 3D formation components and a handy master component, is almost ready as well, so we can release it quickly after the package overhaul.

    Concerning full 3D context steering: the end of 2017 or the beginning of 2018 is still on target. The 3D interpolation (spline-based surface interpolation, spherical harmonics, etc.) got us stuck, so we decided to release the first iteration of 3D context steering with a decent and fast 3D controller for achieving smooth movement, instead of an interpolation approach which would kill performance when used for multiple agents.


    Martin from Polarith
     
    MrJBRPG and one_one like this.
  19. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    @christoph_r is right (we're so thankful to have him here). Since you can dynamically adapt the AI environment to your needs, your issue should not be a big deal. One hint concerning this topic: try not to change the environment sizes too often during runtime; instead, use null references when no objects should be present. This way you prevent expensive re-allocations within our system thanks to a kind of inbuilt pooling mechanism.
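
    A rough sketch of this idea, combined with the OverlapBox scan mentioned above; it assumes AIMEnvironment.GameObjects is a plain GameObject list that tolerates null entries, and all other names are placeholders:

    Code (CSharp):
    using Polarith.AI.Move;
    using UnityEngine;

    // Hypothetical scanner that keeps the AIM environment at a fixed size
    // and nulls stale slots instead of resizing the list every scan.
    public class ObstacleScanner : MonoBehaviour
    {
        public AIMEnvironment Environment;
        public Vector3 HalfExtents = new Vector3(20f, 5f, 20f);

        private readonly Collider[] hits = new Collider[64];

        private void Start()
        {
            // Pre-size the list once so it never re-allocates at runtime.
            while (Environment.GameObjects.Count < hits.Length)
                Environment.GameObjects.Add(null);
        }

        private void Update()
        {
            int count = Physics.OverlapBoxNonAlloc(
                transform.position, HalfExtents, hits, Quaternion.identity);

            // Reuse the same slots; unused ones become null references.
            for (int i = 0; i < hits.Length; i++)
                Environment.GameObjects[i] = i < count ? hits[i].gameObject : null;
        }
    }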


    Martin from Polarith
     
    one_one likes this.
  20. N00MKRAD

    N00MKRAD

    Joined:
    Dec 31, 2013
    Posts:
    210
    I've checked the docs, but I couldn't find out how to add objectives (dangers).

    How are they detected? I can only add text, no tags, layers or GameObjects.
     
  21. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    It seems that by the term "objective", we mean something different than you might think. By "objective", we denote the discrete sensor function the AI samples with the help of context steering, which is the basis for all movement decisions.

    Then, there are objects/percepts which can be obstacles/dangers. A collection of these is what we call an (AIM) Environment. So, if you want, have a look at our beginner tutorials Get Started and Perception Pipeline, which introduce you to our perception pipeline. The perception pipeline and the Environment component let you procedurally add objects through the API. You can achieve what you want either by assigning a layer to your objects or by adding them to the AIMEnvironment.GameObjects list.


    Martin from Polarith
     
    Last edited: Nov 23, 2017
  22. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    621
    Nice!

    @N00MKRAD From my experience, you won't have to change much about the objectives if you start with the basic tutorials. The really interesting stuff to play around with (at least at first) are the steering behaviours, their ranges, etc. I've found starting with the tutorials and messing around from there quite enlightening about what's possible. It's also good for figuring out which behaviour configurations can fit your needs.
     
    Polarith likes this.
  23. Cartoon-Mania

    Cartoon-Mania

    Joined:
    Mar 23, 2015
    Posts:
    320
    This may be a little stupid question, but I am looking for a more comfortable solution. When you choose food in The Sims, the character goes to the refrigerator. Objects such as refrigerators can be installed and removed at any time. Objects can also be moved. I'm looking for a solution where, when I select an action from the menu, the character easily finds the related object. Will your asset help?
     
  24. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hey, @Cartoon-Mania. For steering (with or without pathfinding) your character to the position of the food, you can use Polarith AI, of course. But for modeling the action "take or move food", you need to integrate your AI with a state machine or something similar that triggers the appropriate animation and event.


    Martin from Polarith
     
  25. Cartoon-Mania

    Cartoon-Mania

    Joined:
    Mar 23, 2015
    Posts:
    320
    Thank you for your quick reply. But I think I need to ask some more questions. I want to add objects dynamically, move objects, and remove objects. A character searches for an object and goes to the refrigerator or to the desk, similar to The Sims. I want something easy: easily add, move, or remove objects, and the character should be able to easily find the objects I have placed.

    Take the refrigerator as an example.

    There are several characters. There are also several refrigerators.

    The character who feels hungry goes to find the refrigerator and eats food.

    What I want is that I can easily add objects like a refrigerator and let the character find the refrigerator.

    Like with the refrigerator, the character should go to his desk, look for a treadmill, or look for a TV.

    From a programmatic point of view, I should be able to add objects such as desks to the game, and the character should find out where the desk is located.
     
  26. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Long answer:

    How well Polarith AI performs in your scenario depends on your game logic and AI infrastructure. As long as you make sure to register your objects in an AIMEnvironment, making them available to AI agents, and to use the correct components of Polarith AI, such as behaviours for seeking/avoiding the targeted objects combined with the pathfinding components, it should work for you.

    Note that for your scenario, you will need to manage yourself which concrete object should currently be targeted. This can be achieved with a state machine, which can be seen as the brain of your NPC. When the NPC's brain says "Now, it is time to go to this TV!", you provide the TV object to the perception pipeline of Polarith AI, making it visible to the AI agent. Then, our AI can utilize the inbuilt pathfinding and steering to navigate to the object in a natural way.
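
    As a minimal illustration of this hand-over (a sketch only, with all names hypothetical), the brain could publish the chosen target to a dedicated AIMEnvironment that the seek behaviour uses:

    Code (CSharp):
    using Polarith.AI.Move;
    using UnityEngine;

    // Hypothetical 'brain' fragment: a state machine decides on a target
    // and makes it visible to the agent's perception pipeline.
    public class NpcBrain : MonoBehaviour
    {
        public AIMEnvironment Targets; // environment used by the seek behaviour

        public void GoTo(GameObject target) // e.g. called by your state machine
        {
            Targets.GameObjects.Clear();
            Targets.GameObjects.Add(target); // now visible to the AI agent
        }
    }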


    Short answer:

    Yes, it works, but take into account that you need to master a learning curve when integrating our system into your application, especially when it comes to connecting the actions of your NPC to our behaviours and, finally, converting an AI decision into movement with the help of the inbuilt or your custom character controllers.


    You're always welcome. Let me know if you have any further questions.


    Martin from Polarith
     
  27. N00MKRAD

    N00MKRAD

    Joined:
    Dec 31, 2013
    Posts:
    210
    I haven't really figured out how to use it in 3D space.

    I'd like to transform the ClassicDeadlock scene into 3D, but the agent won't move.

    Or is it not yet ready for 3D?
     
  28. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    621
    Do you mean movement in 3D, but in a way that can be simplified to planar movement? (i. e. the same limitation as with nav meshes.) Or full 3D as you'd use in underwater or flying games?
     
    Polarith likes this.
  29. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    If you mean movement in 3D space with all degrees of freedom, then I'm sorry to tell you that full 3D context steering is such a beast of an algorithm and sensor approach that it is not included in the current version. It has been in active development since August this year. We're very optimistic that we'll manage to release something cool at the very end of this year or the very beginning of the next. Unfortunately, lifting context steering from a planar to a spherical level is more mathematically intense than we initially thought.


    Martin from Polarith
     
  30. N00MKRAD

    N00MKRAD

    Joined:
    Dec 31, 2013
    Posts:
    210
    Just X/Z with circular obstacles would be enough in my case.
     
  31. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    This should usually be no problem at all. Have you seen our tutorials? However, if you cannot find a solution to your issue, we can provide you with a small example scene with the deadlock scenario in 3D.


    Martin from Polarith
     
  32. indie_dev85

    indie_dev85

    Joined:
    Mar 7, 2014
    Posts:
    52
    Hi,

    Can you tell us when the newer version of Polarith (with examples + YouTube tutorials) will be available on the Unity Asset Store? Also, can you please integrate Polarith with the A* Pathfinding plugin from Aron?

    Thanks
     
  33. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hey, @indie_dev85! The past has shown that it is difficult to give a reliable date, but according to the current status of our work, I think that we will upload the new package during the next week. As long as Polarith AI cannot finance our developers, we unfortunately have to accept delays depending on our other projects, which earn us the money we need to be able to put our heart and soul into Polarith AI.

    The integration of A* from Aron has been on our TODO list from the very beginning. That said, at the moment, most users are waiting for formations and 3D. When these features are live, we can talk about an out-of-the-box A* integration.

    Fortunately, our API makes it relatively easy for you to integrate A* yourself. To do so, you only need to inherit from AIMPathfinding and implement the interface appropriately. Of course, we would love to help you with this task. If you like, you can shoot us an email or a private message, or we can talk via Skype to find a decent solution together.


    Martin from Polarith
     
    one_one and MrJBRPG like this.
  34. indie_dev85

    indie_dev85

    Joined:
    Mar 7, 2014
    Posts:
    52
    Thanks for the info, will wait for next version release.
     
    Polarith likes this.
  35. unity_fAr6qcwiDY0pug

    unity_fAr6qcwiDY0pug

    Joined:
    Dec 2, 2017
    Posts:
    1
    Hi, is it possible to provide turn-by-turn directions and show a HUD arrow that shows the direction (turn left, turn right, or straight ahead) and the distance to the objective? For example, in a taxi game, you need to drive the shortest distance to the passenger. The game must give you turn-by-turn driving directions, draw a route along the calculated shortest path, and dynamically change the route + directional arrow. Is this possible using Polarith?

    Thank you
     
  36. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hey. Everything that you've mentioned is possible, but not everything is inbuilt. In particular, the (HUD) visualization is not included at runtime; the context visualization that you can see in our videos is Editor-only. In contrast, the game logic that you describe is exactly what Polarith AI is made for.

    Let me know if you have any further questions.


    Martin from Polarith
     
  37. unity_2LMxNW2cjGl2mA

    unity_2LMxNW2cjGl2mA

    Joined:
    Dec 5, 2017
    Posts:
    3
    Hello Martin,

    Thanks very much for the prompt response.

    I can understand that some of what I've asked is not built in. However, what I need is the ability for the AI engine to give me the shortest path between two points (and any turns I need to make along the way), i.e. a complete route, so that we can visualise the HUD and any scene overlays (like glowing path lines) ourselves. Please note that the player will be driving the car, so it's not really AI. We might use the AI for other cars on the road, but the main purpose is to get the guided route (auto-adjusted and recalculated when the user ignores the instructions), and we'll do the rendering ourselves. If the AI engine calculates the distance, that will be a big bonus.

    Thanks!
     
  38. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    That is precisely what Polarith AI is made for. So our plugin will work fine for you. Let me know if you need help to figure out the API parts you will need to visualize what you want. Welcome to our community!


    Martin from Polarith
     
  39. LootlabGames

    LootlabGames

    Joined:
    Nov 21, 2014
    Posts:
    343
    I am trying to use Polarith AI to steer both player and NPC agents so they don't collide with each other.
    I feel like Polarith could do this without a problem, but after going through all your tutorials, I still have no idea how to do it.
    I am using Unity NavMesh to set the destination for both.
    All agents, both NPC and player, are on the "Units" layer.
    Any help with this scenario is much appreciated.
     
  40. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Your scenario sounds quite manageable. Where exactly do you run into issues? Our system has a learning curve to master, especially in the first days. I'm afraid that your description is currently too generic to give you a comprehensive answer.

    If you like, you can write me a more detailed private message, or we can Skype so that you can show us your scenario via screen capture streaming.

    You're always welcome.


    Martin from Polarith
     
  41. LootlabGames

    LootlabGames

    Joined:
    Nov 21, 2014
    Posts:
    343
    Think of a very simple 3D scene with a plane as the floor: one player-controlled agent and two (or more) NPC (AI-controlled) agents.
    I only want Polarith to handle avoidance (since Unity's built-in avoidance is terrible).
    I use Behavior Designer for the AI decisions.
    I need to be able to tell the agent the position I want it to move towards (using NavMeshAgent).
    If there are other agents in the way, I want it to steer around them (and not push through as it does now).
    Looking at the sample controllers that you have, it is not clear how to make that happen.
    Do you have an example where you can control the agent (3D point and click) but it avoids "dangers"?
     
    Last edited: Dec 17, 2017
  42. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Thank you. Now I think that I understand your problem.

    The NavMeshAgent from Unity is both an AI system and a controller, so it does everything at once: getting a path, making a local avoidance decision, and transforming the decision into actual movement. Thus, it is not possible to merely combine this component with our system as it is. If you have the Pro version, we provide a component that enables you to directly use Unity pathfinding results (coming from a NavMeshAgent managed by us) with Polarith AI and a custom controller (e.g., one of our example controllers). Then, local avoidance is no problem: just add AIMAvoid, or use AIMSeek on a minimized objective as passive avoidance. For that scenario, even our very simple example controllers should be feasible.

    If you have the Free version, it is possible to implement behaviours like AIMFollowWaypoints and AIMUnityPathfinding yourself. To do this, you have to grab the NavMesh path and pass the data into AIMFollow whenever you reach a waypoint.
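
    A rough sketch of that route, using only Unity's NavMesh API; how the current corner is actually handed to the follow behaviour depends on Polarith's API, so the comment marks that step as an assumption:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.AI;

    // Sketch: compute a NavMesh path and advance through its corners.
    public class SimpleWaypointFollower : MonoBehaviour
    {
        public Vector3 Destination;
        public float WaypointRadius = 0.5f;

        private readonly NavMeshPath path = new NavMeshPath();
        private int corner;

        private void Start()
        {
            NavMesh.CalculatePath(
                transform.position, Destination, NavMesh.AllAreas, path);
            corner = 0;
        }

        private void Update()
        {
            if (corner >= path.corners.Length)
                return;

            // Pass path.corners[corner] to your follow behaviour here
            // (e.g. AIMFollow; the exact property is not shown in this thread).
            if (Vector3.Distance(transform.position, path.corners[corner])
                < WaypointRadius)
                corner++; // advance once the current waypoint is reached
        }
    }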

    I hope that I could help you out.


    Martin from Polarith
     
  43. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    621
    Are you planning to make formations and 3D steering all part of Polarith Pro? I'm sure you've considered that already, but a modular approach might help to a) create a steadier stream of income and b) make it financially worthwhile to add features. Numerous asset devs (especially in the character controller department, but also for AI and terrain shading assets) are already going this route. It's generally accepted and seems to work well for them. Seeing how good Polarith is (in functionality, software design and usability), it'd be a shame if you guys had to stop development due to financial considerations.
     
    Zielscheibe and Polarith like this.
  44. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Thank you for your encouragement. Development will not stop in the foreseeable future. :) Concerning 3D: we promised to release 3D for the Pro version, and this is what we will do. Regarding formations and more complex controllers (space, car, etc.), I can imagine that we decide to release separate packages for this stuff.


    Martin from Polarith
     
    one_one likes this.
  45. LootlabGames

    LootlabGames

    Joined:
    Nov 21, 2014
    Posts:
    343
    Martin,
    Thanks for the response.
    I set up a test scene following your explanation, but the agent won't move.
    Right now I am just trying to get the agent to follow a path via AIMUnityPathfinding.
    I can see the path via the gizmos, but the agent never goes anywhere.

    I have the following components attached to the agent:
    -Rigidbody
    -BoxCollider
    -AIMUnityPathfinding
    -NavMeshAgent

    Here is the simple script I'm using:
    Code (CSharp):
    using Polarith.AI.Move;
    using UnityEngine;

    public class PointClickMove : MonoBehaviour
    {
        public AIMUnityPathfinding pathfinding;

        void Update()
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit;

            if (Input.GetMouseButtonDown(0))
            {
                if (Physics.Raycast(ray, out hit, 100))
                {
                    if (hit.collider.CompareTag("Floor"))
                    {
                        pathfinding.Destination = hit.point;
                    }
                }
            }
        }
    }
    I can send you the whole project if you would like.
     
    Polarith likes this.
  46. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hello Katasteel,

    I tested your code with one of our upcoming package scenes, and it worked perfectly. :) The agent stood still until I clicked on the floor, then the AIMUnityPathfinding got activated and returned a path. That path was then passed to AIMFollowWaypoints, and the agent moved accordingly.

    However, we still have to find out why it is not working in your case. The first question is: did you add the NavMeshAgent manually? If so, try to remove it; AIMUnityPathfinding handles the NavMeshAgent. (From our point of view, it is a problem that we have to use the agent class to get a path.)

    There may be plenty of other reasons. In particular, the physics setup might lead to problems that I cannot diagnose without seeing your setup. So can you try explaining what happens, or send me image or video material? I also suggest sending this information via email (support@polarith.com), so we can solve the problem together. I will then post the solution here so others can learn from it.

    Franz from Polarith
     
  47. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    621
    Sounds good! Another benefit of separate packages is that users can choose which functionality they want to include - to avoid the asset getting too bloated.
    By the way, I've been thinking about a sort of 'profile system' - personally, I'd like to be able to easily switch between the different setups I've made, maybe even have smooth transitions. Sure, I could have different prefabs and enable/disable them, but that only allows for hard transitions. It also means that I either have all of them on my actors, bloating them up, or pool and reassign them. And the latter is already getting close to some sort of profile/prototype system. It's probably just a nice-to-have and not a must-have feature from your perspective, but is this something that's on your roadmap anyway?
     
    Polarith likes this.
  48. LootlabGames

    LootlabGames

    Joined:
    Nov 21, 2014
    Posts:
    343
    I kind of have it working.
    The problem I am having now is that it starts moving even when no path is defined.

    I checked the AIMSimpleController update, and it appears that Context.DecidedDirection.sqrMagnitude returns 1 at the very start, causing the agent to start moving without being told to.

    I have uploaded to dropbox and shared with support@polarith.com if you could take a look.

    Thanks for your help!
     
    Polarith likes this.
  49. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    This is the correct behaviour. You are possibly looking for AIMContext.DecidedValues, which correspond to the sampled interest and danger for the decided direction.

    Thank you, Franz will have a look at your example tomorrow.


    Martin from Polarith
     
  50. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    For this purpose, we use different kinds of external state machines at the moment. For example, Mecanim is capable of arranging 'profiles'/'AI states' in a graphical way, including the state transitions. Because such great tools are already there, we prioritize the creation of a preset catalog (a kind of wizard 2.0) over state handling.

    However, you're right. These tools would be a nice completion. Maybe one day, we'll have another plugin or add-on package for this purpose alone. Thank you for this great hint.

    Besides that, we plan something fascinating for the long-term future, when 3D is released, the packages are done, etc., which would partly help you with your issue. At the moment, I cannot say as much about it as I would like. Self-adapting parameters, agents which are able to learn... *ducks away*


    Martin from Polarith