
Polarith AI (Free/Pro) | Movement, Pathfinding, Steering

Discussion in 'Assets and Asset Store' started by Polarith, Apr 18, 2017.

  1. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    483
    :eek:

    Your plans sound absolutely solid and I do agree that most users would benefit more from a library of presets. I'll just look into implementing it myself then.
     
    Polarith likes this.
  2. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Maybe we can talk about your experience in detail when you're done? I would love to do so.


    Martin from Polarith
     
  3. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    483
    Sure thing, it's probably still going to be a couple of weeks after the holidays until I get around to it, but I'll shoot you a message once I'm done.
     
    Polarith likes this.
  4. katasteel

    katasteel

    Joined:
    Nov 21, 2014
    Posts:
    178
    Has Franz had a chance to review the code I emailed?
    I haven't heard anything yet.

    Maybe Polarith is just not a good match for what I need?
    I'm looking for humanoid-type movement, not vehicle movement.
    So unless I give it a destination, I don't expect any movement to occur.
     
  5. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    483
    Polarith is about directional movement, based on information inherent to a position. Finding a destination is more in the realm of scoring logic that compares multiple destination candidates, something like utility AI. Polarith certainly can be, and is, used for humanoid movement, though. It's up to you how you implement the controller that puts the result of the AI into action.
     
    Polarith likes this.
  6. FirstAndTen

    FirstAndTen

    Joined:
    Oct 16, 2017
    Posts:
    16
    I've watched some tutorial videos and perused the manual this morning. I like what I see but am concerned about integrating it into my project.

    I'm writing a sports game where there are 22 player agents in play at once. Each agent's movement is controlled through a state machine architecture; the steering behavior logic is in the FSMs. I'm looking for a library that I can call into and combine with logic from the FSMs (for example: avoid all players except the target, pursue the target) rather than writing the steering logic myself. I don't mind adding some components like the context component to the player prefab, but I'd ideally like to do most of my work with Polarith AI through code.

    Is Polarith suited to this type of integration? My main concern is the number of objectives that it can handle at once. If there are 21 other agents in play, how would I get around the limitation of keeping it at 5 or fewer? Also, agents are generated at runtime, not at design time, so I can't preset them into layers in Unity.

    Thanks.
     
  7. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    A happy new year everyone! :D


    Polarith AI is a very good match for what you need. Franz should have answered you a week ago, as far as I know.


    Martin from Polarith
     
  8. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Hello and welcome to our forum thread! Polarith AI should suit your scenario. Indeed, it can be used through code as you suggested. (Of course, we can help you with this when your setups become more complex.)

    I think you have got something wrong concerning objectives: objectives do not correlate with the number of obstacles, items, other agents and so forth. Instead, each objective denotes a space or sensor part for gathering specific information about an agent's environment. For example, all things which should be considered "obstacles" are sampled into one danger (or obstacle) objective, while everything which is interesting to the agent goes into another objective, regardless of the number of interesting objects.

    It works similarly to a camera sensor: your camera is also able to sample multiple objects with one sensor at once. I hope this concept is clearer to you now.

    Let me know if you have any further questions. You are welcome.


    Martin from Polarith
     
  9. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Hi Katasteel,

    sorry for the delay, I just missed your message here over the holidays.

    I actually answered right away on the 22nd of December. Maybe the message was sent to your spam folder or something?
    If not, please email me again or send me a PM here on the forum. Then we'll get this problem solved :)

    Franz from Polarith
     
  10. katasteel

    katasteel

    Joined:
    Nov 21, 2014
    Posts:
    178
    Yep, you were right, it was sitting in my junk email.
    I have replied via email.
    Thanks again.
     
  11. guidoponzini

    guidoponzini

    Joined:
    Oct 4, 2015
    Posts:
    55
    I was leaning towards A* Pathfinding, but then I saw your plugin, as I need to work with crowd and movement simulations and with behaviours like chasing, etc. I thought about it a lot, mostly because A* Pathfinding has a strong customer base and I have had problems in the past with amazing assets that were later discontinued. Finally, I bought your Polarith AI because I love the idea and it looks great for optimizing huge masses of people. I will go through the tutorials soon :) Keep working on it, it looks great :)
     
    Polarith likes this.
  12. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Welcome to our community and thank you! People like you enable us to work on what we love. A new patch arrives in the next two days, so we hope you'll like what you've invested in.

    Let us know if you ever have any questions.


    Martin from Polarith
     
  13. Cartoon-Mania

    Cartoon-Mania

    Joined:
    Mar 23, 2015
    Posts:
    300
    When will deterministic decision-making be updated?
     
  14. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Hey, @Cartoon-Mania!

    Since v1.4, all of our core algorithms have the potential to work in a deterministic manner when you trim the results to four floating-point decimal places. For that to work, unfortunately, you cannot use the components as they are because they integrate themselves into Unity's update loop/mechanisms and these are your greatest enemy when it comes to determinism.

    You would have to update agents together with your deterministic client-side code. Therefore, you need to write an update manager which runs in sync with, for example, your lock-step system. For that, you can inherit from our class AIMContextEvaluation, which automatically deactivates all agent updates (when placed in a scene) so that the derived component can update them all in sync with your lock-step system.
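    The idea above can be sketched roughly as follows. This is a hypothetical outline, not finished code: the base class AIMContextEvaluation is taken from the description above, but the exact update hook it exposes is an assumption to be checked against the API reference; only the rounding helper is plain Unity math.

```csharp
using UnityEngine;
using Polarith.AI.Move; // namespace assumed for AIMContextEvaluation

// Hypothetical sketch of a lock-step driver. Deriving from
// AIMContextEvaluation deactivates the per-agent updates (as described
// above); the deterministic simulation then drives all agents itself.
public class LockstepContextEvaluation : AIMContextEvaluation
{
    // Call once per deterministic simulation tick instead of relying on
    // Unity's Update()/FixedUpdate() order. The member that evaluates all
    // registered contexts is an assumption; consult the API reference.
    public void LockstepTick()
    {
        // e.g. evaluate all registered agent contexts here (hypothetical).
    }

    // Trim a decided direction to four decimal places, as suggested above,
    // so results stay identical across machines.
    public static Vector3 TrimToFourDecimals(Vector3 v)
    {
        return new Vector3(
            Mathf.Round(v.x * 10000f) / 10000f,
            Mathf.Round(v.y * 10000f) / 10000f,
            Mathf.Round(v.z * 10000f) / 10000f);
    }
}
```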


    Martin from Polarith
     
  15. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Shiny Packages



    Oh hey, didn’t see you there! We haven’t talked in quite some time, and there’s a ton of new stuff we should catch up on. First, a happy new year to everyone! We hope you had the merriest of holidays and a great new year’s eve. Second, we’ve been very busy: There is this neat patch we were working on, which includes shiny new packages for both Free and Pro users. Keen? Let’s go!






    Here’s the changelog for v1.6:

    Changes
    • Complete rework of the Polarith AI packages for both Pro and Free
    • New overall look of the examples using fresh public domain assets, models, sprites, textures, etc.
    • Packages now include proper Polarith image material to comply with the license
    Enhancements
    • [Pro] Added a 3D scene that illustrates different methods for improving the performance of scenes with many agents
    • Added laboratory scenes demonstrating the effects of components for both 2D and 3D scenarios
    • Added a 3D scene that shows a sophisticated example of vehicles moving in a roundabout using a state machine
    • Added a 3D scene where cars behave properly on a priority crossroad
    • Added a 3D scene where a character collects items in a forest
    • Added a 2D scene that demonstrates a boid using attraction, repulsion and alignment
    • Added a 2D scene with an example multiplayer space game
    • Added an example RootMotionController including source code
    • Added an example VehicleController including source code
    • Added several example scripts which are necessary for the new scenes
    • AIMContext: Added a public AddObjective method
    • AIMSeekNavMesh: Improved the whole concept of the behaviour, it now uses a more precise raycast method
    • Editor: Added default objects to the hierarchy within Unity’s context menu
    Fixes
    • AIMContext: Fixed a bug with the indicator gizmo that occurred when changing the sensor
    • AIMSteeringPerceiver: Fixed an issue that occurred when using both layers and game object lists, whereby the layer object percepts were overwritten
    • AIMSeekNavMesh: Fixed a problem where the behaviour did not work when the SelfObject in AIMContext was null
    • Documentation: Added a missing UnityUtils namespace documentation
    • Documentation: Corrected a wrong description of the AIMContext.DecidedDirection

    What’s Next?
    The next update will contain our long-awaited 3D feature: imagine a spherical sensor that perceives objects in every possible or specified direction. With that, you can move entities like aircraft, spaceships and hot-air balloons in three-dimensional space! But there's more: with a brand new sensor, it was necessary to create a handful of new behaviours as well. Now the lion's share of the work is done; we just need a little more time to tweak some niceties. So hang on just a little longer and, we promise, you won't want to miss the next patch.


    Martin from Polarith
     
    Korindian and one_one like this.
  16. alandang

    alandang

    Joined:
    Mar 10, 2015
    Posts:
    1
    Hi!

    I recently tried to incorporate Polarith AI into a test scene which I contacted you about not long ago, so I apologize for the short time between contacts. This morning I tried to create a completely new scene using an animated model of a cat and two boxes, one that it is interested in and one that it is not. I tried my best to follow the settings to a T, but the cat still ignores the boxes and simply wanders off. What am I doing wrong?

    https://we.tl/4whKPEI0aY
     
  17. potatojin

    potatojin

    Joined:
    Apr 11, 2012
    Posts:
    23
    Just purchased the Pro version and I'm very excited to get this integrated into my game!

    Any idea when/if the Suburb/Racer demo scenes (seen in this video) might be released? Looks like there are some cool techniques to learn from those that would come in handy for my project.

    Thanks for the hard work and great asset!
     
  18. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Hey, alandang!

    We answered your question via mail 20 minutes ago since we didn't manage to have a look at your example until now. Sorry for the delay. Let me know if you still need support. :)


    Franz from Polarith
     
    Last edited: Jan 29, 2018
  19. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Hi @potatojin,

    thanks for purchasing our product and for the feedback! :)

    These scenes didn't make it into our current release, even though it was planned to release at least the racing scene. Including it would have delayed the release further, so for now we decided to hold it back. We're also going to consider polishing the suburb scene and including it as well, since you requested it.

    The general problem is that more scenes result in a lot more maintenance effort than it might seem. Especially physics-based scenes, because they always behave differently in different Unity versions. We've also had feedback that, for some users, the agents in the Lab scenes sometimes stop moving in certain Unity versions, so we have to re-iterate these as well.

    If you have specific questions or just want to discuss approaches on how to design an AI for your scenes, you can, of course, write a mail to support@polarith.com or post them here. You're always welcome.


    Franz from Polarith
     
    Last edited: Jan 29, 2018
  20. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    YouTube Glory



    You know, between all the champagne in the clubs and the countless pool parties at the Polarith mansion, it does happen that we check our inbox. And let me tell you, it’s packed!

    There were plenty of messages that are best responded to with a video on YouTube. We've got some tutorials coming up, like a 30-minute mini-series and a handful of quick tips showing essential concepts and the basic application of our AI in your awesome games. Also, we've purchased a neat new vocal preamp that should make our content producer's voice somewhat bearable, which is nice. In addition, we are planning to show you the newest features face-to-face, which means that you will see us on camera in the future, so brace yourselves!

    How to Move an AI Character - Part 1
    In this tutorial video, we show you how our AI can work together with animated agents, how to make behaviours perform more naturally, and how to solve problems through a deeper understanding of Polarith AI. Here's the first part of our new tutorial series. Make sure to subscribe to our channel to be notified when we publish the next parts!




    Martin from Polarith
     
    one_one likes this.
  21. CurtisMcGill

    CurtisMcGill

    Joined:
    Aug 7, 2012
    Posts:
    66
    Hi Polarith

    I am a pro, not a pro but a Polarith Pro, and I was wondering what package you used to create the game on your website, i.e., the marbles, shadow and white stuff coming out.
     
    Polarith likes this.
  22. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Hey, @CurtisMcGill!

    Welcome to our community. Which scenes do you mean? Do you mean the stuff which shows up full-screen when you enter the http://polarith.com/ai/ page? Textures, (primitive) objects for ground, walls, etc., lighting and particles are all pretty much standard stuff. We added a little bit of noise via normal mapping, used Unity's GI lighting and the UBER shader for the glass refraction.

    If you want to know anything else, let me know.


    Martin from Polarith
     
  23. Censureret

    Censureret

    Joined:
    Jan 3, 2017
    Posts:
    214
    Hello.

    I am currently using A* Pathfinding, but I am considering shifting to Polarith.

    Could you tell me what the difference between the two is and how Polarith works with a lot of 3D agents?
     
  24. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Hey, @Censureret!

    Your question might look harmless, but the answer can escalate very quickly, so I'll try to keep it as short as I can.

    Well, although the A* Pathfinding Project and Polarith AI target similar goals, namely moving characters around in a decent manner, comparing the two is like comparing apples with oranges.

    The A* Pathfinding Project finds a global solution to the problem of navigating a character to a desired position in space by following a calculated route. In contrast, Polarith AI's (context) steering algorithms sample the local environment to make an optimal local decision by balancing the currently observable pros and cons. Of course, this local decision need not be the global optimum or best decision possible but, in some cases and under certain conditions, the results can be equal. From a performance point of view, finding local solutions is much cheaper than finding global optima. So yes, Polarith AI can handle many 3D agents (with a planar AI environment until the next patch arrives). The Pro version offers a special Performance component for such purposes.

    That said, we are now reaching the interesting part. Since most of us want to make great games which are as immersive as possible, global optima are not the type of solutions which make your AI behave naturally. We, as humans, compare everything to ourselves, and in most cases we do not find optimal global solutions in real-time at all. We try to be as good as we can. That's how we are, and that's how Polarith AI works. We made it work this way because we wanted to eliminate illogical behaviour in modern AI approaches without handling countless special cases. And we came up with a highly parameterizable system whose true strength lies in the power of combining atomic behaviours into more complex ones.

    Because Polarith AI's core algorithms steer your character, they do not primarily navigate it. Again, let's look at us humans: to drive a car to a specific target, we often need both a navigation system like Google Maps and a driver who considers the information from the navigation system and, combined with the local traffic situation, turns it into actual steering actions for the car. (At the moment, it seems that we'll have autopilots for this task in the future. :))

    So, like in the real world, your AI will behave more naturally by combining the best of both worlds. That's why we provide a connector to Unity's pathfinding routines. We've abstracted everything quite well, which means that you can combine any pathfinding solution, including the A* Pathfinding Project, with our plugin by simply overriding two classes and passing over the necessary data. This way, you can rely on A* for calculating paths from time to time (asynchronously) and use Polarith AI for following them, which is not only much cheaper regarding performance but more immersive as well. On top of simple pathfinding, it grants you access to the great world of implicit and emergent movement behaviour (think of boids, swarms, formations, and so forth).

    As can be seen in our issue tracker, we've already planned to provide an A* Pathfinding connector out-of-the-box in future releases.

    I hope my answer didn't bore you too much. You asked one of those questions which, as a fan of this technique, trigger me a lot.


    Martin from Polarith
     
    Last edited: Feb 5, 2018
    Censureret and one_one like this.
  25. Censureret

    Censureret

    Joined:
    Jan 3, 2017
    Posts:
    214
    @Polarith

    First of all, thank you so much for your response. I can actually see a lot of great points in what you wrote and in what your documentation says.

    After viewing all of your tutorials and introductions to your algorithm, one question struck me:

    How would this work with my classic state machine (Action/Decision)?

    In your tutorials, you cover "objectives" as kind of the "need" and "refuse" (objects in the world that you want to get and objects you want to avoid).

    While this is very basic, the game that I am building relies on much more advanced AI, such as formations, melee and ranged combat and general RTS-like controls.

    With that in mind, I got worried that I would be unable to set it up as simply as shown in your tutorials (I know that they are there to show a proof of concept).

    So, I guess my question is: how easy would it be to implement such behaviors with the "Polarith Asset"?
     
  26. Censureret

    Censureret

    Joined:
    Jan 3, 2017
    Posts:
    214
    Okay, so I just bought the Pro version of the asset. Now I need to create a combination of A* Pathfinding and Polarith. If you can help me in the right direction, I will, of course, share all my code so that everyone can enjoy the implementation.
     
    Polarith likes this.
  27. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    That's great! Thank you so much for supporting our development.

    Alright, for integrating A* with Polarith AI, you need to derive from AIMPathfinding, implementing everything which is abstract in this base class. For an optimal implementation, your overridden methods should behave exactly as stated in the documentation. Good starting points for the implementation are the abstract property points [get] and the abstract method CalculatePath(Vector3 destination). If these are implemented appropriately, the rest of our (virtual) implementation should already work, whereby it is important that the internal validators can do their job so that the system knows when to request a path (re-)calculation.

    If this is done well, you'll end up with a new component which must be in your scene and which can then be plugged directly into our AIMFollowWaypoints behaviour. That should already work, because the only condition of this behaviour is that the path is specified through discrete points. :)

    If you run into trouble, we would love to help you out, since we're also very interested in an A* connector. Just shoot us a mail at support@polarith.com, or we can do a little bit of Skype if necessary. In the end, if everything works well, we'll consider integrating your solution into Polarith AI and putting the sources into the package for learning purposes.


    Now to your earlier question: what you want to achieve can be done with Polarith AI, although especially (generalized) formations are not an easy task to master. That's why one of our developers is working on a small formation addon for Polarith AI which, when finished, can be bought by any Pro or Free user at a very fair price. :)


    Martin from Polarith
     
    one_one likes this.
  28. Censureret

    Censureret

    Joined:
    Jan 3, 2017
    Posts:
    214
    @Polarith

    Okay, I'll try and get it up and running; I might add you on Skype.

    After using your asset for about a day now, one thing came to mind. It seems that your asset allows the AI to make decisions based on how "badly" it wants to achieve an objective of finding an object.

    Now, this works well if you wish to simulate a crowd or a car game where you have direct waypoints.

    However, I haven't found an example where you force an AI to go to a location. One of my cases could be: "Go to a building site and once you have reached it, start building the house."

    Can you tell me if this is actually possible?
     
  29. Censureret

    Censureret

    Joined:
    Jan 3, 2017
    Posts:
    214
    Okay, a small update from me.

    I have actually managed to make it use the "Seeker" from A*. It looks something like this so far:

    upload_2018-2-7_14-22-24.png

    With the following code:

    Code (CSharp):
    public class AIMAstarPathfinding : AIMPathfinding
    {
        private Seeker _seeker;
        private readonly List<Vector3> copiedPoints = new List<Vector3>();

        [Tooltip(
            "This validator verifies the current path status. For example, the path is stale after changing the area mask and this might cause a re-calculation.")]
        [SerializeField] private AIMAstarValidator _astarValidator = new AIMAstarValidator();

        /// <summary>
        /// This validator verifies the current path status (read only).
        /// </summary>
        public AIMAstarValidator AstarValidator
        {
            get { return _astarValidator; }
        }

        /// <summary>
        /// Returns the path points in global coordinates as a copy.
        /// </summary>
        protected override IList<Vector3> points
        {
            get { return copiedPoints; }
        }

        private void Awake()
        {
            _seeker = GetComponent<Seeker>();
        }

        // Note: must be 'protected override', not 'private', for base.Start() to compile.
        protected override void Start()
        {
            base.Start();
            _astarValidator._seeker = _seeker;
            distanceValidator.PathPoints = copiedPoints;
            validators.Add(AstarValidator);
        }

        public override void CalculatePath(Vector3 destination)
        {
            _seeker.StartPath(transform.position, destination, _astarValidator.PathCallback);
        }
    }
    And the following validator:

    Code (CSharp):
    public class AIMAstarValidator : Validator
    {
        public Seeker _seeker;

        private Path lastPath;

        public void PathCallback(Path p)
        {
            lastPath = p;
        }

        public override bool Validate()
        {
            return lastPath != null && !lastPath.error;
        }
    }
    And it actually sort of works. The CalculatePath method is being called, however the character only walks straight no matter what :( so there is a bug somewhere. I debugged the controller and it seems that the context direction is always (0,0,1), which I simply cannot understand.

    If anyone knows something or can see what I've been missing please give me a shout!
     
  30. Censureret

    Censureret

    Joined:
    Jan 3, 2017
    Posts:
    214
    For the interested, I can add the following image:
    upload_2018-2-7_14-48-33.png

    What you can see here:

    The green line is the path calculated by the seeker (A*)

    Sadly she just walks in a straight line (as shown by the gizmo)
     
  31. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Something like this is possible by using state machines like Unity's Mecanim to control and parameterize a Polarith AI agent on the fly based on the current situation. The example scenes which contain the crossroads and roundabouts demonstrate how to utilize Unity's state machine for such purposes. In these scenes, we switch AI states continuously to force agents to take the right actions, thus combining the right behaviours with the correct parameters.

    Concerning the objectives, you've already got it right. These are simply the "view" of the AI world regarding the position (and sometimes orientation) of friendly or bad things, like an image sampled by a camera sensor.


    Martin from Polarith
     
  32. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Thank you for your first try. Alright, I will have a close look at this tomorrow. Sadly, today I'm suffering from a migraine and I'm currently unable to focus on this great stuff. But together, we'll find a decent solution as soon as possible. :)

    Having had a very rough first look, it seems that you're on the right track with the implementation. Maybe you've missed a little detail. However, we'll see when I feel better.


    Martin from Polarith
     
  33. Censureret

    Censureret

    Joined:
    Jan 3, 2017
    Posts:
    214
    @Polarith

    Great, I look forward to our chat! By the way, with nothing but 5-star responses and support, I'll rate your product as soon as I get home.

    ALSO, your documentation is top notch! And trust me, I have tried A LOT of assets with poor documentation, so it's nice to see that you think highly of your end users!
     
    Polarith likes this.
  34. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Thank you so much, a rating would be awesome! Looking forward to the further integration of A* together with you.


    Martin from Polarith
     
  35. Censureret

    Censureret

    Joined:
    Jan 3, 2017
    Posts:
    214
    **HUGE UPDATE**

    So I made it work: the first small version of A* pathfinding with Polarith. Here is an image:

    upload_2018-2-7_22-39-58.png
     
    Polarith likes this.
  36. Censureret

    Censureret

    Joined:
    Jan 3, 2017
    Posts:
    214
    So now everything works, even without the Seeker.

    One thing I am still looking for is how I can use my coded state machine to manipulate the outcome. I have read nearly all the documentation and I can't find an example of how to know when a waypoint is reached so that I can react to it from outside code.

    @Polarith can you help?
     
  37. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189

    Awesome @Censureret! I'm very happy that you managed to integrate A*. Now we can utilize your experience to bring this joy to the rest of our community, if you let us, of course.

    You can query the currently targeted point in the AIMFollowWaypoints behaviour via its properties Target (the position of the current waypoint to be followed) and TargetIndex (the index of that point in the given Points list). A waypoint is reached when the distance between the agent and the point is smaller than or equal to AIMFollowWaypoints' TargetRadius. The easiest way to check whether a waypoint was reached is to track the value of TargetIndex, which is then incremented (by the given StepSize).
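    To illustrate, tracking TargetIndex as described could look roughly like this. The properties TargetIndex, TargetRadius and StepSize are taken from the description above; everything else (class and method names) is an illustrative sketch:

```csharp
using UnityEngine;
using Polarith.AI.Move; // namespace assumed for AIMFollowWaypoints

// Sketch: detect "waypoint reached" by watching TargetIndex, which
// increments (by StepSize) once the agent is within TargetRadius.
public class WaypointWatcher : MonoBehaviour
{
    public AIMFollowWaypoints followWaypoints;
    private int lastIndex = -1;

    private void Update()
    {
        int index = followWaypoints.TargetIndex;
        if (lastIndex >= 0 && index != lastIndex)
        {
            // A waypoint was reached; drive an external state machine here,
            // e.g. switch from "Travel" to "Build" at the building site.
            Debug.Log("Waypoint reached, next target index: " + index);
        }
        lastIndex = index;
    }
}
```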


    Martin from Polarith
     
  38. hagedor

    hagedor

    Joined:
    Feb 15, 2018
    Posts:
    2
    Hi,

    I've really been having a lot of fun fooling around with this plug-in, but I have a question about something I've been struggling to figure out on my own. Is there a way to extract the tag of a percept that is "sensed" by the agent? For example, I want to know whether my agent senses an object of interest in order to enable other behaviors I have written. An example behavior could be: if my agent senses an object of interest, its speed is 1; otherwise, its speed is 0. How would I get that information?
     
  39. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189

    Hey, @hagedor! Welcome to our community, and let's get straight to your question.

    For an easy solution, you can use the Label property in the AIMSteeringTag. This information will also be available in the extracted percept belonging to your original object, via the SteeringPercept's Label property. This property can be used by all behaviours working with percepts, i.e., those inheriting from PerceptBehaviour<SteeringPercept>.
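    As a rough illustration, a custom behaviour reading the Label could look like this. Only SteeringPercept.Label and the PerceptBehaviour<SteeringPercept> base are taken from the description above; the override point and the Percepts collection are assumptions to be checked against the behaviour reference:

```csharp
using Polarith.AI.Move; // namespace assumed

// Sketch: expose whether any sensed percept carries a given label so that
// external code can, e.g., set the agent's speed to 1 or 0.
public class LabelSpeedBehaviour : PerceptBehaviour<SteeringPercept>
{
    public string interestLabel = "ObjectOfInterest"; // hypothetical label

    public bool SensesInterest { get; private set; }

    // Assumed per-update hook over the gathered percepts.
    public override void Behave()
    {
        SensesInterest = false;
        foreach (SteeringPercept percept in Percepts)
            if (percept.Label == interestLabel)
                SensesInterest = true;
    }
}
```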

    However, I'm not sure that you need this additional information. By using our perception pipeline, you automatically get the percepts belonging to a specific "interest" or "danger" environment. That's why all of our inbuilt behaviours require the user to select an appropriate environment for obtaining the percepts when a Steering Perceiver is used within the scene.

    We're glad to have you here. You're always welcome to ask further questions.


    Martin from Polarith
     
  40. hagedor

    hagedor

    Joined:
    Feb 15, 2018
    Posts:
    2
    Thanks for responding! I've been reading the documentation really closely and have messed around with these classes. Let me explain a little more.

    Question 1:
    Whether using Seek or Pursue, there is a radius within which an object of interest will be considered for the steering behaviour. How can I access that information to know whether the agent currently has an object within that radius, and which object that is?

    Which leads to Question 2:
    While trying to figure this out, I've noticed that the forward receptors always give a base DecidedValue of 0.2, even when there are no objectives on the map. This causes the controller to move forward for no reason. Even Reduction and Arrive will only ever lower the DecidedValue to 0.2, and the agent never actually stops. How do I fix this without manually adjusting steeringBehavior.Speed?

    and finally Question 3:
    I just want my agent to sit still until an object of interest comes into range; then it will go get it, destroy the object of interest with a collision, and sit still again. Reduction and Arrive have the property of one targetObject that they focus on, so I tried using whether that was null to manipulate the controller into doing what I want, but then it doesn't allow me to destroy the object because that would cause some sort of catastrophic loss of information about the percept. Any advice?

    I know this is a lot, but I've tried everything I could find in the documentation to find a simple solution and I'm out of ideas. Thanks for being so responsive to everyone! It's really great to see.

    EDIT: Oof I knew it was something simple I was overlooking. It's a bittersweet discovery that it was the Stabilization max increase of 0.2 that caused the agent to have values for the selected environment in the forward direction when there was nothing there.
     
    Last edited: Feb 25, 2018
  41. wicea

    wicea

    Joined:
    Jan 23, 2016
    Posts:
    4
    Hi! Could you explain to me how local avoidance works?
    I loaded the test scene "Avoid" and changed the radius of the danger collider (or the danger scale), and the agent stopped steering. What am I doing wrong, or what did I miss?

    upload_2018-2-27_17-37-29.png
     
  42. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189

    Hey @hagedor,

    sorry for the delayed answer. We're glad you figured out the stabilization issue yourself.

    Concerning question 1: Context steering is an algorithm that samples the world into 1D functions (images made of the environment) which we call objectives or context maps. During this process, the information about single objects is lost because, for a proper decision, we do not need it any longer. When objects are converted to percepts, the behaviours figure out for themselves which percepts are relevant. Behaviours do not cache this information so as not to reduce performance too much, but we have already thought about caching the closest object per behaviour because this seems to be what is required most often. So, at the moment, what you want can only be achieved by deriving a new SteeringBehaviour or by doing the radius check on your own. However, I would recommend you anyway to maintain a spatial level structure separate from Polarith AI that allows for fast spatial queries, e.g., spatial hashing.
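    For the do-it-yourself radius check mentioned above, a plain Unity sketch (independent of Polarith AI) could look like this; the tag string is an assumption for however you mark your objects of interest:

```csharp
using UnityEngine;

// Simple per-agent radius query using Unity's physics. Returns the closest
// object of interest within 'radius', or null if none is in range.
public static class NearestInterestQuery
{
    public static Transform FindClosest(Vector3 agentPosition, float radius, string tag)
    {
        Collider[] hits = Physics.OverlapSphere(agentPosition, radius);
        Transform closest = null;
        float bestSqrDistance = float.PositiveInfinity;

        foreach (Collider hit in hits)
        {
            if (!hit.CompareTag(tag))
                continue;

            // Compare squared distances to avoid the square root per object.
            float sqrDistance = (hit.transform.position - agentPosition).sqrMagnitude;
            if (sqrDistance < bestSqrDistance)
            {
                bestSqrDistance = sqrDistance;
                closest = hit.transform;
            }
        }
        return closest;
    }
}
```

    Note that Physics.OverlapSphere scales poorly with many agents and objects, which is why a dedicated spatial structure such as spatial hashing is preferable for larger scenes.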

    Concerning question 3: There are a lot of different ways of achieving what you want.
    • Option 1: Use the interest objective values as the speed in the controller. If no interest is sampled, the object will not move at all. For better control, I would advise you not to use our inbuilt example controllers but to write your own. As a good starting point, you can have a look at the sources of the controllers within the package.
    • Option 2: This can also be achieved using AI states, e.g., via Unity's Mecanim. For example, in OnTriggerEnter you can deactivate the AI or specific behaviours. When the scene requires it again, switch the AI state back to the desired behaviour. The package contains multiple examples of similar setups, like the roundabout or crossroad scenes, which require agents to change behaviours continuously.
    • Option 3: Use another Seek together with the advanced layer system of the behaviours. This Seek should be subtracted from the other objective magnitudes. To be executed after all your other steering behaviours, it needs a higher Order than the rest.
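    A minimal sketch of Option 1, assuming a custom controller that reads the decided direction and magnitude from the AIMContext component. The property names (DecidedDirection, DecidedValues) follow the inbuilt example controllers, but verify them against your package version:

```csharp
using Polarith.AI.Move;
using UnityEngine;

// Custom controller sketch: the decided objective value is used directly as
// the movement speed, so the agent stands still whenever no interest is
// sampled — no manual fiddling with steeringBehavior.Speed required.
[RequireComponent(typeof(AIMContext))]
public class InterestDrivenController : MonoBehaviour
{
    public float MaxSpeed = 3f;
    private AIMContext context;

    private void Start()
    {
        context = GetComponent<AIMContext>();
    }

    private void Update()
    {
        float magnitude = context.DecidedValues.Count > 0 ? context.DecidedValues[0] : 0f;
        if (magnitude <= 0f)
            return; // no interest sampled: stay put

        transform.position += context.DecidedDirection * magnitude * MaxSpeed * Time.deltaTime;
    }
}
```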
    I hope this helps you out or gives you a good starting point.


    Martin from Polarith
     
  43. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189

    Hi @wicea,

    thanks for pointing this one out. It is indeed our fault, not yours. The lab scenes should work as described when hitting play; you shouldn't have to do anything.

    We have already noticed this problem and fixed it on our develop branch. The problem lies in Unity's physics system: sometimes the agent stops at the collider borders because it cannot overcome the friction. In bugfix patch 1.6.1, we'll change the colliders so that this cannot happen anymore.

    Interestingly, we did not observe this behaviour in Unity 5.3, which we need to use for developing the package for compatibility reasons. So, unfortunately, physics seems to behave differently between Unity versions.

    Nevertheless, as a workaround, you can turn down the friction of the attached physics material. The reason we have not managed to upload a hotfix yet is that we are in the middle of moving into a new office. If you had written a few minutes later, I would already have packed up my PC. :)
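    The workaround can also be applied from code at startup; this sketch assumes the agent has a Collider with a physics material attached:

```csharp
using UnityEngine;

// Workaround sketch: remove friction from the agent's collider so it cannot
// get stuck on obstacle borders. Accessing Collider.material creates a
// per-object material instance, so the shared asset is not modified.
public class ZeroFriction : MonoBehaviour
{
    private void Start()
    {
        Collider agentCollider = GetComponent<Collider>();
        PhysicMaterial material = agentCollider.material; // instantiates a copy
        material.dynamicFriction = 0f;
        material.staticFriction = 0f;
        material.frictionCombine = PhysicMaterialCombine.Minimum;
    }
}
```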

    Sorry for causing such trouble.


    Franz from Polarith
     
  44. mkgame

    mkgame

    Joined:
    Feb 24, 2014
    Posts:
    583
    Hi, I'm working on an RTS game (www.metadesc.com), where the units are moved by steering. At the moment, I have grid-based pathfinding. To be able to move different-sized units, I have to create multiple grid graphs to give them a bigger collision-testing range. I also have a separate grid graph for the Army AI. But I still have trouble with local avoidance, group movement behaviour, and group arrival conditions. I just have some questions:

    1. Is your pathfinding solution good and fast enough for an RTS game? I need to move at most about 120 units at the same time: 30 for the player and about 90 for 3 AI players. That would be enough. (For desktop, i7/i5.)

    2. I have different-sized units, and it is known that bigger units need a bigger collision check to get around corners. How is this solved in your pathfinding solution?

    3. Are multiple graphs possible? As I described, in my pathfinding solution I need multiple graphs because of the different unit sizes. I also need an Army-AI graph, which controls a group of units and has a different view on what should be avoided and what not. Or do you have another solution for these issues?

    4. Do you have a good arrival solution for multiple units? Somehow the units must stop, even if they cannot reach the desired position (take the worst case: no formation is used and all units have the same target position). This can be improved by having a minimal arrival distance, but units outside this distance must somehow also be set to arrived.

    A short answer or a reference to your documentation is enough.
     
  45. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Hi @mkgame,

    a very cool looking game you have there. :)

    One point I have to make clear from the beginning: Polarith AI is not a pathfinding solution. It is a library for local decision making using the novel context steering approach. What we provide is an integration for Unity's pathfinding, as well as an interface for integrating any other pathfinding solution. For example, you might integrate Polarith AI into your existing grid-based pathfinding so that agents follow the path while also considering local obstacles, without recalculating the path if this is not desired.

    Under these premises:

    1. No problem when using a feasible environment setup together with the load balancing provided by our AIMPerformance component.
    2. This depends on the deployed pathfinding solution; Unity's system actually has problems with that. Other than that, our AI can handle priorities for different units well.
    3. This also depends on the pathfinding solution used.
    4. There is no one-click solution since this is very game-specific. Our agents can be stopped in different ways, either through an AIMArrive behaviour that reduces the interest magnitude in the target's direction, or directly via the controller independent of our AI. This again makes it possible to handle such things without any changes to the actual path.

    Regarding group movement: even though it is currently possible to add simple formation behaviours, they are not included in the package. We are working on an add-on that delivers a collection of formation behaviours and corresponding controllers.

    In conclusion: since you require a specific pathfinding solution, Polarith AI might not be what you need. What Polarith AI can do is extend the AI of your agents with local decision making so that they act smarter in situations where the global pathfinding approach is not feasible or too expensive.

    I hope I could answer your questions. If you need further information feel free to ask. :)

    Franz from Polarith
     
  46. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    483
    Have you guys already looked into the ECS and jobs system? You've probably spent quite a lot of time on your custom performance optimization, but it'd be amazing to have this running in ECS once it's out of beta and its APIs have solidified. It seems like it'd be a perfect fit.
     
    Polarith likes this.
  47. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Thanks for the hint. We've already had a look at it, but we haven't taken the time to go into detail yet. A good topic for the next coding marathon. :)


    Martin from Polarith
     
    one_one likes this.
  48. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    483
    It's probably going to be a while until the API is relatively stable, and there are still a lot of features and Unity API integrations coming, but it may already be enough for the core functionality. Jobs is already going to be part of 2018.1, and from what I've seen, if you've 'jobified' the code you're likely more than halfway there already, with the ECS being more about scheduling the jobs and setting up the data flow. Plus, with the hybrid ECS/GameObject approach, the transition could be quite smooth as well, especially for those who want to stick with GameObjects.
     
    Polarith likes this.
  49. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    189
    Franz and I had a closer look at this cool stuff last weekend. We came to the conclusion that we'll definitely adopt the ECS/Jobs system. The change could take a while, but I'm convinced it's absolutely worth it. Thank you for your ongoing involvement. We really appreciate you. :)


    Martin from Polarith
     
    one_one likes this.
  50. puzzlekings

    puzzlekings

    Joined:
    Sep 6, 2012
    Posts:
    377
    Hey, just came across this asset.

    When is the next update with Spherical sensor due?

    cheers

    Nalin
     
    Polarith likes this.