Polarith AI (Free/Pro) | Movement, Pathfinding, Steering

Discussion in 'Assets and Asset Store' started by Polarith, Apr 18, 2017.

  1. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Dear @BCFEGAmes,

    I am glad that you ask, I have prepared something for you :cool:

    Code (CSharp):
    int k = 1; // k-neighborhood: 0 = self, 1 = direct neighbors, 2 = first- and second-degree neighbors (2 hops)
    List<int> neighbors = new List<int>();
    neighbors.Add(context.Context.Decision.Index);

    for (int i = 0; i < k; i++)
    {
        List<int> tmpNeighbors = new List<int>();
        foreach (int nId in neighbors)
        {
            tmpNeighbors.Add(nId);
            tmpNeighbors.AddRange(context.Context.Sensor.GetReceptor(nId).NeighbourIDs);
        }
        neighbors = tmpNeighbors;
    }

    Debug.Log(context.Context.Sensor.GetReceptor(neighbors[0]).Structure.Direction);
    This shows you how to obtain the neighboring receptors of the decided receptor. Of course, you can use any other receptor as the basis, span the neighborhood, and rotate the receptor directions into world space.
    To get a visualization of the k-neighborhood, have a look at my master's thesis on page 32, figure 4.6.

    Edit: Note that there might be duplicates in the k-neighborhoods.
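    A hedged variation that avoids those duplicates: collecting the IDs in a HashSet<int> instead of a List<int> keeps each receptor at most once (the same context.Context.Sensor API as above is assumed):

    Code (CSharp):
    // Same k-neighborhood walk, but duplicate IDs are dropped automatically.
    HashSet<int> neighbors = new HashSet<int> { context.Context.Decision.Index };

    for (int i = 0; i < k; i++)
    {
        HashSet<int> tmpNeighbors = new HashSet<int>(neighbors);
        foreach (int nId in neighbors)
            tmpNeighbors.UnionWith(context.Context.Sensor.GetReceptor(nId).NeighbourIDs);
        neighbors = tmpNeighbors;
    }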

    Scientific greetings,
    Martin 'Zetti' Zettwitz from Polarith.
     
    Last edited: Jan 5, 2021
    BCFEGAmes likes this.
  2. Skorcho

    Skorcho

    Joined:
    Jul 1, 2013
    Posts:
    16
    Hi Martin,
    Thanks for your fast reply! If I understand this correctly, I could use this to implement "adaptive raycasting": once a receptor has found something, extending to its neighbors gives me access to other receptors that I could use, for example, for raycasting!

    Unfortunately I'm stuck at a lower level, at setting the types for the context and sensor variables. I went looking at the API and created variables of type Context and Sensor, expecting a serialized field to drag the relevant components into, but no luck so far. Oh, to have a team with a good coder to work with!

    Kindest regards,
    Sergio.

    PS your thesis has amazing visualizations, only glanced, but very cutting edge!
     
    Polarith likes this.
  3. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @Skorcho,

    Well, this is a blueprint for cones or half-spheres since you build a neighborhood around a point (receptor) on an approximate sphere. It was not meant to work with the sensor itself, only to reuse its geometric structure. You may use it for adaptive raycasting, too, but it may be more efficient to use smaller (or bigger) distances between the rays than the original sphere, depending on the accuracy or the size of the objects you want to perceive. Four rectangularly aligned rays may also be sufficient. Deep adaptive sampling (recursion) is possible as well: a larger distance between the initial ray (hit) and the first iteration of adaptive rays, then a second iteration of more tightly aligned rays around the hits of the first iteration, and so on. As an acceleration, casting additional rays only towards the borders may be a good addition since you already know the object's centre.

    You can fetch the components via script (e.g. GetComponent), but if your field is public and your class inherits from MonoBehaviour, it is serialized automatically.
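    For illustration, a minimal MonoBehaviour with public fields that Unity serializes, so the Polarith components can be dragged in via the Inspector (the Polarith.AI.Move namespace and the class name are assumptions here; AIMContext and AIMSensor are the component types in question):

    Code (CSharp):
    using Polarith.AI.Move;
    using UnityEngine;

    public class NeighborhoodProbe : MonoBehaviour
    {
        // Public fields on a MonoBehaviour show up in the Inspector,
        // so the AIMContext and AIMSensor components can be dragged in.
        public AIMContext context;
        public AIMSensor sensor;
    }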

    Thanks :)

    Have a great day,
    Martin 'Zetti' Zettwitz from Polarith
     
    Skorcho likes this.
  4. Skorcho

    Skorcho

    Joined:
    Jul 1, 2013
    Posts:
    16
    My bad, I was creating the Variables of type Sensor & Context, instead of AIMSensor & AIMContext, it's working wonderfully now! Thanks.
     
  5. Skorcho

    Skorcho

    Joined:
    Jul 1, 2013
    Posts:
    16
    Hi Martin,
    Working wonderfully. I have the raycasts working, and Debug.DrawRay working nicely, with the orientation linked to the heading of a wandering agent by using

    transform.TransformDirection(sensor.Sensor.GetReceptor(neighbors[nId]).Structure.Direction);
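    Put together, that setup might look roughly like this hedged sketch (the neighbors list comes from the k-neighborhood snippet earlier in the thread; the rayLength field and the loop shape are assumptions):

    Code (CSharp):
    // For each neighboring receptor, rotate its local direction into
    // world space and cast a ray along it; hits could feed a danger objective.
    foreach (int nId in neighbors)
    {
        Vector3 dir = transform.TransformDirection(
            sensor.Sensor.GetReceptor(nId).Structure.Direction);

        Debug.DrawRay(transform.position, dir * rayLength, Color.red);

        if (Physics.Raycast(transform.position, dir, out RaycastHit hit, rayLength))
        {
            // React to the obstacle here.
        }
    }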

    Can I ask, if it's easy: is there a way of getting the receptor with the largest magnitude? Something that lets me check for a receptor resulting from a specific behaviour and scan from there?
     
  6. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Unfortunately not: every behaviour writes into the context map, and no information about the underlying behaviours is kept since this was never intended. The underlying MCO solver extracts the optimal (maximum) receptor automatically. But maybe you can tell me your plan in detail. Feel free to write an email to support@polarith.com.

    Best wishes,
    Martin 'Zetti' Zettwitz from Polarith.
     
    Last edited: Jan 8, 2021
    BCFEGAmes likes this.
  7. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    I must say that once I put everything in place, after a bit of trial and error on length of raycast, value for K, and threshold between interest and danger, by using seek on a target, and seek on a conical array of raycasts based on your scripts, I got the most reliable terrain avoidance behaviour I've managed so far! Thanks for the prompt help, and clean solution!
     
    Polarith likes this.
  8. imump

    imump

    Joined:
    Jul 3, 2011
    Posts:
    55
    Any chance for integration with Game Creator?
     
  9. Censureret

    Censureret

    Joined:
    Jan 3, 2017
    Posts:
    363
    Hey guys :) i looked at the asset many years ago and now that i have even more experience with AI i am considering getting back into it :) How well do you think it will scale with 100+ units?
     
  10. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    We do not plan official support for Game Creator. Since we are not game developers in the first place, we are not familiar with it. A quick look suggests that it uses the common Unity mechanics, so one should be able to use our system with Game Creator: we simply tell the agent in which direction to move, i.e., we compute a movement direction. I see no problem as long as Game Creator provides an extensive API where you can feed the outcome of our AI into the Game Creator movement logic.

    You are welcome to try our free version, which already provides the full logic needed to test the integration.
    We would be happy if you share your experiences here with us :)

    Best wishes,
    Martin 'Zetti' Zettwitz from Polarith.
     
  11. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hey @Censureret,
    We already provide an example with 500 pedestrians in the package examples of the Pro version. Feel free to play around and test the possible optimizations. Note that our raycast behaviours (especially in 3D) may have a big impact on performance. In most cases, such advanced behaviours are not needed for large-scale applications since big groups of characters do not need perfectly reasonable behaviour for every agent, only for the overall experience. Agents that are important or close to the player are a better point to concentrate on :)

    Best,
    Martin 'Zetti' Zettwitz from Polarith.
     
    Last edited: Feb 17, 2021
  12. Censureret

    Censureret

    Joined:
    Jan 3, 2017
    Posts:
    363
    Hello Martin, thank you for your answer.

    So I have created a utility-based AI system, and I want to give Polarith a test, and I could use some help setting it up.

    The game I am creating is an RTS with a lot of units, some friendly, some enemies.

    The goal of the game is to raid a castle. The enemy units in my scene will have to defend several capture points and also fight off enemy units. I am currently using a decision system I would like to integrate with Polarith AI so that I can control what a unit does and when.

    The examples you have seem very "pre-programmed", following a path, so I am wondering if it is possible to use Polarith with the use case I have?
     
  13. Bazzajunior

    Bazzajunior

    Joined:
    May 23, 2015
    Posts:
    20
    I am successfully using Polarith AI to power a number of prefab cars in my scene, using layers in environments within the AIM Steering Perceiver, which seek out the player's car (the game is a chase/hunt setup). Everything works fine with cars placed in the scene and the AIM Steering Perceiver dragged onto the prefab cars' AIM Steering Filter, but now I want to instantiate prefab cars into the scene.

    I've tried creating a prefab of the AIM Steering Perceiver, identical to the one in the scene, but when the prefab car is instantiated in the scene (note the spawned prefab car points to the prefab AIM Steering Perceiver, not the one already in the scene), it doesn't work.

    I've also tried reverting the AIM Steering Perceiver on the previously working prefab cars (reverting the dragged-and-dropped perceiver to the prefab version), and whilst everything looks identical between the scene version and the prefab version, everything stops working when the prefab version of the AIM Steering Perceiver is enabled. I'm guessing this has something to do with the prefab version not being 'live', but I'm a bit baffled as to how to set up an instantiated prefab car to work.

    Should I be looking at a scripted version of the perceiver rather than using one loaded into the scene?


    Here's the scene with the AI Car only working with the AIM Steering Perceiver in the scene (note the AIM Steering Perceiver circled in blue was dragged into the scene).


    Here's the AIM Steering Perceiver in the scene...


    And here's the prefab AIM Steering Perceiver looking identical.
     
  14. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    This scene only demonstrates techniques for performance optimization; the underlying behaviours work the same as usual. Otherwise, we would need to create a lot of examples to cover all possibilities :D
    For an RTS, you will need some fine-tuning to get a good look and feel for the groups. State machines might be a good addition to control the arrival and attacking of units so that they do not block each other. Also, I recommend using as few behaviours as possible, to improve performance and to have better control while fine-tuning.

    I would be happy to see your progress here.
    Good vibes,
    Martin 'Zetti' Zettwitz from Polarith.
     
  15. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @Bazzajunior,

    I am not quite sure why you want to instantiate another AIMSteeringPerceiver. Usually, you create them offline since they are part of the level design and the overall logic. They are used to gather groups of objects; individual objects can additionally be attached to a behaviour using its specific game-objects list. Thus, you attach the same perceiver to each agent/NPC. Instantiating perceivers at runtime is not intended.
    If you make changes during runtime, make sure to call AIMSteeringPerceiver.Update() and AIMEnvironment.UpdateLayerGameObjects().
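    As a hedged illustration of that advice (the class name and the two public fields, assumed to be dragged in via the Inspector, are assumptions):

    Code (CSharp):
    using Polarith.AI.Move;
    using UnityEngine;

    public class RuntimeEnvironmentRefresh : MonoBehaviour
    {
        public AIMSteeringPerceiver perceiver;
        public AIMEnvironment environment;

        // After spawning or despawning perceivable objects at runtime,
        // refresh the environment and the perceiver manually.
        public void Refresh()
        {
            environment.UpdateLayerGameObjects();
            perceiver.Update();
        }
    }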

    Hope I could clarify some things,
    Martin 'Zetti' Zettwitz from Polarith.
     
  16. Bazzajunior

    Bazzajunior

    Joined:
    May 23, 2015
    Posts:
    20
    Hi Martin, thanks for getting back to me on this. I've maybe been a bit unclear (quite hard to put this into words) but hopefully I can clarify further here. I'm looking to instantiate the prefab cars at multiple spawn points in the scene but have them all react to a single AIM Steering Perceiver (i.e. not one each).

    Previously, I've placed multiple prefab AI cars into the scene (not instantiated from a script) and created a single perceiver with environments in the scene. When I drop the prefab AI cars in from the Prefab folder, I have to drag and drop the scene's perceiver into each car's AIM Steering Filter.

    If I instantiate a prefab AI car from a script to a spawn point, the AIM Steering Filter would be empty as I can't reference a perceiver from a scene to the prefab model's filter. So, what I attempted was to use a prefab version of the perceiver and drag this into the prefab AI Car's filter (i.e. before it is instantiated in the scene, it has a perceiver from the prefab folder supplied), but sadly this doesn't work when the car appears in the scene.

    So would it just be a case of adding a script to the prefab AI Cars (removing the current steering filter?) which can then 'look' for the scene's perceiver when they are spawned? I'm not totally sure how I would do this :confused:

    Thanks for your time on this :)
     
    Last edited: Feb 18, 2021
  17. Bazzajunior

    Bazzajunior

    Joined:
    May 23, 2015
    Posts:
    20
    Ah, the answer to my problem was in front of my face this whole time :D

    I didn't notice that if I tagged my perceiver in the scene, I could then reference it in the AI Car prefab before it's instantiated. I found the answer by looking at the documentation for the Steering Filter.

    I think I was trying too hard to set the Steering Perceiver from a convoluted script that had me going in circles :confused:
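    That tag-based lookup might be sketched like this (the tag name "SteeringPerceiver" and the SteeringPerceiver property on the filter are assumptions based on the description above):

    Code (CSharp):
    using Polarith.AI.Move;
    using UnityEngine;

    public class PerceiverLookup : MonoBehaviour
    {
        public AIMSteeringFilter steeringFilter;

        private void Awake()
        {
            // Find the single scene perceiver by tag and hand it
            // to the spawned car's steering filter.
            GameObject go = GameObject.FindWithTag("SteeringPerceiver");
            if (go != null)
                steeringFilter.SteeringPerceiver = go.GetComponent<AIMSteeringPerceiver>();
        }
    }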

     
  18. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    @Bazzajunior

    Glad that you found the solution. Have fun setting up the AI, have a great Friday!
    Martin 'Zetti' Zettwitz from Polarith.
     
  19. jmacgill

    jmacgill

    Joined:
    Jun 17, 2014
    Posts:
    17
    I'm thinking it would be useful if there was a mapping function that allowed a constant value beyond the max radius; perhaps a sigmoid function which, when enabled, evaluates to 1 at >= max radius (so yes, steering would have to be enabled beyond the max radius in that case).
     
  20. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @jmacgill,

    Actually, you can do this. Imagine you have your desired radius steering behaviour A. Add a second one, e.g. AIMSeek B. Now set the inner radius of B to the outer radius of A, and the outer radius of B to wherever you want to extend A to. Additionally, set the distance mapping of B to constant. Now you have a constant value outside of A's maximum radius. Note that this way you will lose some performance optimization, since the outer radius normally filters out all objects that are too far away to be perceived.

    Best greetings,
    Martin 'Zetti' Zettwitz from Polarith.
     
  21. jmacgill

    jmacgill

    Joined:
    Jun 17, 2014
    Posts:
    17
    Oh, yes, I know it is possible. I've also written my own versions of some behaviours that do this by default. I just thought having some additional curves would be useful. Even in the above case, it would be nice to have behaviours A, B, C, with A being < min, B a sigmoid from min to max, and C for beyond max.

    Might be too specific to my needs though.
     
  22. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    @jmacgill,
    I'll put it as a feature request on our GitHub repo :)

    Best wishes,
    Martin 'Zetti' Zettwitz from Polarith.
     
    BCFEGAmes and jmacgill like this.
  23. DwinTeimlon

    DwinTeimlon

    Joined:
    Feb 25, 2016
    Posts:
    300
    Hello @Polarith

    I am working on an RTS game that uses unity NavMesh for a few years now.
    You can find some info on the project here: https://store.steampowered.com/app/1368160/Heart_Of_Muriet/

    I have tweaked the Unity NavMesh system to its limits, implemented my own formation movement system, surround target system, have flying units working etc. Nonetheless, the avoidance system is my biggest pain and I am currently evaluating different options and might switch to a more solid solution at some point.

    Here are a few questions:

    - Is it planned to use DOTs for your navigation system at some point?
    - Is your system deterministic, as Unity's isn't yet (needed for lockstep RTS multiplayer)?
    - Do you support travel costs per agent as well as the Unity layer system for your navigation mesh generation?
    - Do you support realtime updating the navmesh + costs for dynamic objects/obstacles?
    - Is it required to break down navigation areas when they get big, what would be the limit?
    - Is it possible to calculate a complete path asynchronously not depending on an agent?
    - Does your system support surrounding targets like in Starcraft 2?
    - I have read in this forum thread that formations are planned, can you shortly outline the features?

    Thanks :)
     
  24. dashasalo

    dashasalo

    Joined:
    Oct 7, 2016
    Posts:
    11
    Hi @Polarith,

    Thoroughly enjoying your asset! One thing I can't seem to be able to figure out though...

    If I only use Wander with no other behaviours enabled my character goes into a spin. If I disable rotation on my RigidBody-based controller the character moves in the desired direction but of course without rotating towards it. If I set speed to 0 then the character rotates correctly but of course it doesn't move. As soon as both rotation and movement are enabled it spins.

    If I reduce rotation and movement speeds to minimum it looks like the Solution arrow from the Wander behaviour is rotating together with my character... So my controller is just following the inputs and rotating.

    I suspect the issue is my lack of understanding. The funny thing is that rotation works on its own as long as the character doesn't move. Should the Wander solution arrow rotate together with my object? Or should it continue pointing to the original direction while my character rotates?

    If anyone could point me in the right direction it would be greatly appreciated!

    Thank you!
     
  25. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @DwinTeimlon,

    There seems to be a little misunderstanding regarding the features of our asset. We do not provide a pathfinding solution or anything similar. To clarify: pathfinding is a global solution with (godlike) knowledge about the area, like the navigation system in a car. Our asset offers local steering, the way individuals in nature act: react to the things you see. BUT: we provide support for pathfinding engines such as Unity's pathfinding.

    To your questions:
    - Is it planned to use DOTs for your navigation system at some point?
    As I said, we do not navigate in the classical sense. Unfortunately, we started developing this asset with Unity 4.8, so there is a gap between our system core and the features Unity has implemented in the past years. We may try implementing JIT for the vector computations at some point, but we can't promise it since our asset is feature-complete and time is scarce. There will definitely not be full DOTS support.
    - Is your system deterministic, as Unity's isn't yet (needed for lockstep RTS multiplayer)?
    There are some hacks, but there is the general problem of floating-point numbers and determinism. Though it is possible, it is not pretty. You can read more in our manual.
    - Do you support travel costs per agent as well as the Unity layer system for your navigation mesh generation?
    We work on top of Unity's NavMesh.
    - Do you support realtime updating the navmesh + costs for dynamic objects/obstacles?
    See above. With every timestep, our NavMesh behaviours update against the current NavMesh. Other obstacles on top of the NavMesh are perceived as usual.
    - Is it required to break down navigation areas when they get big, what would be the limit?
    See above.
    - Is it possible to calculate a complete path asynchronously not depending on an agent?
    If the linked pathfinding solution includes this feature, yes.
    - Does your system support surrounding targets like in Starcraft 2?
    Yes.
    - I have read in this forum thread that formations are planned, can you shortly outline the features?
    We have no release date, but we got something in the pipeline.

    Have a great day,
    Martin 'Zetti' Zettwitz from Polarith.
     
    BCFEGAmes likes this.
  26. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @dashasalo,
    This behaviour is intended. With AIMWander, the agent obtains a random angle for its movement rather than (what you expected) a direction. Fortunately, it is relatively easy to implement a custom version of AIMWander that fits your needs.

    Have a fresh start into the week,
    Martin 'Zetti' Zettwitz from Polarith.
     
    dashasalo likes this.
  27. dashasalo

    dashasalo

    Joined:
    Oct 7, 2016
    Posts:
    11
    Thank you - this makes a lot of sense! Any way to get the source code for the original Wander behaviour?
     
  28. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    I've sent you a PM with some hints, yesterday ;)

    Happy coding,
    Martin 'Zetti' Zettwitz from Polarith.
     
    dashasalo likes this.
  29. DwinTeimlon

    DwinTeimlon

    Joined:
    Feb 25, 2016
    Posts:
    300
    Thanks a lot for the fast reply and for clarifying all my questions! Much appreciated.
     
  30. Davidbillmanoy

    Davidbillmanoy

    Joined:
    Jul 7, 2014
    Posts:
    120
    Hello! I'm working on a racing game powered by NWH 2, a car physics engine. I am stuck, because the AI Car doesn't do anything right now. Anyone can help me with this?
     
  31. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @Davidbillmanoy,

    I am not familiar with NWH 2, thus I can't give you detailed instructions. If you have not already done so, please have a look at our tutorials on YouTube and in our manual. If you have, the best way is visual debugging using the AIMContextIndicator to check whether the agent perceives the environment.

    Have a great day,
    Martin 'Zetti' Zettwitz from Polarith.
     
  32. zoltanBorbas

    zoltanBorbas

    Joined:
    Nov 12, 2016
    Posts:
    83
    Hi Everyone,

    I am new to Polarith; I just started looking at the examples in the Pro version, and I was wondering how it comes that I cannot make the Arrive behaviour work with the Physics Controller 2D. I am confused since it seems to have an effect on the AI, but not the desired one.

    Any guidance would be very much appreciated.

    P.S.: Here is the game I am working on, to see if I can get better results with Polarith than with A* Pathfinding Project's local avoidance: https://imagined-reality.itch.io/stellar-sovereigns

    Thanks in advance!
     
  33. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @zoltanBorbas,
    The Arrive behaviour is compatible with physics controllers. Note that our behaviours only compute a direction and a magnitude; it is up to the controller (and hence, most of the time, up to you) to turn this into movement.
    However, using our example controllers, you need a rigid body, and you have to set the controller to Objective As Speed (interest).
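    For a custom controller, the direction-plus-magnitude idea could be sketched like this (a simplified Rigidbody2D mover; the speed field and the class name are assumptions, and DecidedDirection is the context property mentioned later in this thread):

    Code (CSharp):
    using Polarith.AI.Move;
    using UnityEngine;

    [RequireComponent(typeof(Rigidbody2D))]
    public class SimpleContextMover2D : MonoBehaviour
    {
        public AIMContext context;
        public float speed = 2f;

        private Rigidbody2D body;

        private void Awake()
        {
            body = GetComponent<Rigidbody2D>();
        }

        private void FixedUpdate()
        {
            // The AI only yields a direction; the controller decides
            // how to move the agent along it.
            Vector2 direction = context.DecidedDirection;
            body.velocity = direction.normalized * speed;
        }
    }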

    [Screenshot: the example controller set to Objective As Speed.]

    Have a great start into the new week,
    Martin 'Zetti' Zettwitz from Polarith.
     
  34. airoll

    airoll

    Joined:
    Jan 12, 2021
    Posts:
    37
    Hello, I have recently purchased Polarith Pro. I have a question about 3D applications with the Steering Filter. I noticed that the Steering Filter has a parameter called Range, which is defined as: "Percepts within this range are made available to the behaviours, whereby all values smaller than 0 correspond to infinity."

    https://imgur.com/a/MaDQI3H


    When I turn on the Range gizmo for the Copter example, I notice that the gizmo only shows a single green 2D circle, which leads me to ask: does Range look in 3D, or does it only look in 2D? If it looks in 3D, does it look in a sphere?
     
  35. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @airoll,

    Welcome to the Polarith family :) Thank you for your feedback. The range filter works in 3D as well since we check the Euclidean distance to the target, which in Unity uses 3D vectors. You can easily test this by moving a target around the sensor, best within the Context Indicator scene in the Lab3D folder. So yes, it would look like a sphere. We have put the gizmo issue on our list.

    Have a great start into the week,
    Martin 'Zetti' Zettwitz from Polarith.
     
  36. airoll

    airoll

    Joined:
    Jan 12, 2021
    Posts:
    37
    Hello, I am trying to understand the following:

    1. What is the difference between having an Avoid Bounds behavior associated with an Interest objective vs. a Seek Bounds behavior associated with a Danger objective (as set up in the CopterHall demo example)? From what I can gather, the only difference is that the Danger objective has a constraint and will therefore reject an objective value above that constraint. Is that correct?

      Said another way, is there any difference between a Seek Bounds behavior associated with a Danger objective w/ a constraint and an Avoid Bounds behavior associated with a 3rd Avoid Environment objective (max objective instead of min objective) w/ a constraint?

    2. Also, is there any purpose to secondary objectives w/ constraints, except for the fact that objectives whose values violate the constraints are rejected? The objective values are not otherwise used to inform which receptor to choose (only the primary objective is used for that), correct?

    3. How is Max Prediction Time used in Evade? Does it calculate the position of a percept at each timestep until Max Prediction Time and try to avoid all those positions?
     
    Last edited: May 25, 2021
  37. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    I guess Martin will have a more thorough explanation, but one of the forum posts links to the original "context steering" paper. The danger objective effectively constrains the interest objective: any percept value on danger above the threshold blocks interest in that direction for that percept. They're processed by the solver, with good solutions up to about 5 objectives. I personally had a seek interest constrained by static-object avoidance on danger_static and dynamic-entity avoidance on danger_dynamic; it's all in the docs, really...
     
    Polarith likes this.
  38. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @airoll!

    1. They are similar but not the same. While Avoid Bounds actively says where to go, Seek Bounds (danger) says where not to go. This may matter depending on your further logic. Additionally, Avoid Bounds comes with more parameters to configure how to avoid. The main difference is that Avoid Bounds uses capsules to compute the tangent for its movement; hence, there are no abrupt changes. Note that there is no need to add a 3rd objective since you can only maximize a single objective.
    Regarding the constraint: this is correct. All solutions above a certain danger threshold are rejected.

    Thank you @BCFEGAmes. Currently, we are preparing some papers of our own that we can reference in the future :)
    I think you mean this part of the docs ;)

    2. All objectives except the one to maximize are used for thresholding with epsilon. In most cases, two objectives will fit your needs. A third objective may be necessary for some rare special cases (we experimented with some boids and alignment). Note that the more objectives you use, the more unstable the results become, due to the curse of dimensionality.

    3. No. What you describe is some kind of so-called 'temporal difference learning': for each timestep, until x timesteps are reached, each non-static object is integrated for a single timestep and new actions are computed (which may be manifold). This becomes extremely complex and, of course, is not possible for our AI since we do not know the logic of other objects, and our AI targets real-time multi-agent scenarios.
    What we do is project the position using the velocity and the prediction time, and evaluate the environment at this position.
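    That projection step is simple enough to sketch (an assumption-level illustration of linear prediction, not the asset's actual code):

    Code (CSharp):
    using UnityEngine;

    public static class EvadeProjection
    {
        // Linear prediction: where will the percept be after
        // 'predictionTime' seconds if it keeps its current velocity?
        public static Vector3 PredictPosition(
            Vector3 position, Vector3 velocity, float predictionTime)
        {
            return position + velocity * predictionTime;
        }
    }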

    Both of you, have a great Tuesday!
    Martin 'Zetti' Zettwitz from Polarith.
     
    Last edited: May 26, 2021
    BCFEGAmes likes this.
  39. EntangledGames

    EntangledGames

    Joined:
    Mar 8, 2013
    Posts:
    9
    How can I implement NavMesh Links with AIMUnityPathfinding and AIMFollowWaypoints? It seems that the agent just teleports across the NavMesh Link, even if AutoTraverseOffMeshLink is set to false on the agent.

    Any idea if this is being caused by the polarith behaviours?
     
    Last edited: Jun 2, 2021
  40. airoll

    airoll

    Joined:
    Jan 12, 2021
    Posts:
    37
    Hi, is there a way to access the values that a behavior is going to write to an objective from the behavior itself? To describe my use case, I would like to use the output objective values of the Avoid Bounds behavior as an input to another system (a neural network) in the form of a float array.

    Is that written into the objective field? If so, would I just subclass Avoid Bounds and create a public property that exposes the field? Are the values pointed to by the objective field overridden by subsequent behaviors? I am looking for only the output objective values of the behavior itself, not of other behaviors that might also be active.

    EDIT: Also, as a separate question, if I have a kinematic rigidbody attached to my object that I want to detect in the perception pipeline, I should add a Steering Tag and enable Track Velocity, correct? If so, how does the steering tag compute the velocity?
     
    Last edited: Jun 3, 2021
  41. airoll

    airoll

    Joined:
    Jan 12, 2021
    Posts:
    37
    Hi @Polarith, I am having an issue with static environments.

    I have set up my scene to use both an Interest and Danger objective, using the following configuration.

    ContextSetup.PNG

    I created a Seek Bounds behavior, whose Target Objective is set to Danger.

    SeekBounds.PNG

    However, when I run my scene with a context indicator for Danger, the receptor gizmos appear, then they disappear as if the danger objective was not running anymore. This is reinforced by the fact that when the object moves using DecidedDirection, it runs through environment objects. Here's a video to demonstrate.

    DangerObjectiveDisappearing — Kapwing

    After a lot of debugging, I think this is related to the fact that my Environment environment, which is the Filtered Environment for the Seek Bounds behavior, is set to static. If I set it to non-static, the Danger receptor gizmos continue to appear. However, I would obviously like my environment to be static for performance reasons. Any suggestions for how I might fix this issue?
     
  42. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @EntangledGames,

    Big sorry for the long delay! Once again, Unity did not send us an email to inform us about new posts.

    You need to distinguish between our behaviours and the movement itself. Our behaviours do not perform any movement; the controller does. We provide some example controllers that will fit some simple needs, but most of the time you'll need to write your own controller.
    AIMUnityPathfinding is only an interface that communicates with Unity's NavMesh to provide waypoints. AIMFollowWaypoints simply traverses the points provided by Unity (or any other solution, if you write a connector) and adds (interest) values towards the current waypoint.

    So, all in all, you'll need to check the controller since it performs the movement. Maybe someone else in the forum can help you with this topic.

    Have a great day,
    Martin 'Zetti' Zettwitz from Polarith.
     
  43. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @airoll,

    We are sorry for the delay, too.
    1. The objective field (IList) represents the objectives (interest/danger), not the objective values. The values you are looking for are written into the context map. Unfortunately, you can't access these values before they are written into the context map. Since the context map is a collection of values from all behaviours, you can't distinguish without special logic. To overcome this limitation, you may want to clone the game object containing the polarith AI components and use only the behaviour you want to examine. Then, you can access Context.Problem.

    2. As written in our doc: "Some behaviours need to know the velocity of a perceived game object (percept). So, there are two ways of obtaining this information. First, if a Rigidbody is attached to an object, the velocity is obtained automatically. Second, by attaching a SteeringTag and activating TrackVelocity, the velocity is computed via the perception pipeline. If both conditions are met, the velocity of the SteeringTag is always preferred." The tag simply computes the movement vector by differences between the time steps.
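    To illustrate the idea of computing a movement vector from differences between time steps, here is a plain Unity sketch of our own (not Polarith's actual SteeringTag implementation, just the same finite-difference principle):

    Code (CSharp):
    ```csharp
    using UnityEngine;

    // Estimates the velocity of this object by finite differences,
    // i.e., (current position - last position) / elapsed time.
    public class VelocityTracker : MonoBehaviour
    {
        public Vector3 Velocity { get; private set; }

        private Vector3 lastPosition;

        private void Start()
        {
            lastPosition = transform.position;
        }

        private void Update()
        {
            Velocity = (transform.position - lastPosition) / Time.deltaTime;
            lastPosition = transform.position;
        }
    }
    ```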

    3. We need more information, especially about the setup of your environment. There are no known issues with static environments as long as the objects are known to the environment in advance. Otherwise, the environments need to be updated manually.
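    As a sketch of such a manual update: if objects are spawned into a static environment after startup, the environment must be told to re-collect them. The method name UpdateLayerGameObjects() is an assumption based on our reading of the Polarith API, so treat it as hypothetical and check the API reference:

    Code (CSharp):
    ```csharp
    using Polarith.AI.Move;
    using UnityEngine;

    // Re-collects the game objects of a (static) environment at runtime.
    public class EnvironmentRefresher : MonoBehaviour
    {
        // Drag the AIMEnvironment component here in the inspector.
        public AIMEnvironment environment;

        // Call this after instantiating or destroying environment objects.
        public void Refresh()
        {
            // Assumed API call: rebuilds the GameObjects list from the layer.
            environment.UpdateLayerGameObjects();
        }
    }
    ```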

    Best wishes,
    Martin 'Zetti' Zettwitz
     
  44. airoll

    airoll

    Joined:
    Jan 12, 2021
    Posts:
    37
    Hi @Polarith!

    Here are a couple screenshots that show how the environment is set up. I have my environment objects nested under a game object called Environment (see object hierarchy on left). Each environment object is set to the layer "Environment" (see inspector on right).

    EnvironmentHierarchy.PNG EnvironmentObject.PNG

    Then I have an AIM Environment component which refers to the layer "Environment".

    Environment Environment.PNG

    This is how my SteeringPerceiver is set up. Environment is one of the perceived environments.

    SteeringPerceiver.PNG

    In my Steering AI game object, I have my Context, SteeringFilter, and a Seek Bounds behavior whose objective is set to Danger, and is set to perceive the environment "Environment".

    SeekBounds.PNG

    When I set my Environment "Environment" to Static = True, the context indicator disappears a few seconds after running the scene, as in the video I uploaded. I did some additional debugging and verified that the objective problem values were set to zero, which is consistent with the context indicator no longer showing. However, when I set Static = False, it works fine.

    The other thing that I'll add is if I rapidly press Perceive Static (Runtime Only) on my Steering Perceiver object over and over after the scene is running when Environment "Environment" Static = True, sometimes for a few frames I will see the context indicator appear again for a frame, and then disappear. I don't know if it matters, but there are over 600 objects in my Environment layer. I also tried manually setting just 20 objects from my environment via the GameObjects list in the AIMEnvironment component, but that didn't work either when Static = True.

    I am using Unity 2020.3.10f1. I'll post some additional screenshots of my config in a follow up post because I can only upload 5 per post.
     
    Last edited: Jun 11, 2021
  45. airoll

    airoll

    Joined:
    Jan 12, 2021
    Posts:
    37
    For additional details, this is how my Seek Bounds was additionally set up. SeekBounds2.PNG

    This was my Performance component. I tested with and without this component (disabling the Threaded property of the AIMContext component), and it didn't change anything.

    Performance.PNG

    This is how my Steering AI (which has AIMContext, SteeringFilter, my behaviors, etc.) is set up. The Self Object was set to the Character AI (which contains the AIMSteeringTag), the parent object of Steering AI.

    ObjectHierarchy.PNG

    The screenshot below is of the ContextIndicator.
    ContextIndicator.PNG

    If it helps, I'm happy to send an email with a copy of my project for additional debugging. I spent a few hours trying to sort through it but I have no idea - it's hard to reverse engineer what's going on without the source code haha.
     


  46. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    I think this would be the best option. If possible, provide a minimal example scene, otherwise, I'll have a look at the full scene. Please mail to support@polarith.com

    Best,
    Martin 'Zetti' Zettwitz from Polarith.
     
  47. Owlzy

    Owlzy

    Joined:
    Jul 7, 2014
    Posts:
    3
    Hi There,

    I'm having some issues getting Arrive behaviour to work in a suitable way for our needs. Is there a comprehensive arrive example I can reference?

    Currently they gently flip back and forth in direction once they arrive at their target. It's somewhat alleviated by the AIMArrive behaviour, but they still don't stop completely on arrival. I feel like there's some threshold value I need to use, but I can't see it in the documentation.

    Or is this something that should be handled in the controller? I saw this section in the documentation for the arrive behaviour, but I don't quite understand it:

    "for instance by scaling the velocity of the agent with the objective value of the made Decision."

    Thanks,
     
    Last edited: Jun 21, 2021
  48. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @Owlzy,

    You answered your question yourself: you need to handle this in the controller, since it is mandatory to use the objective value as speed (as in our example controllers). That way, the translation is scaled by the decision's magnitude, and you can avoid overshooting. Additionally, there is an example in our package.
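    A minimal controller sketch of "objective value as speed": the agent's speed is scaled by the decided objective value, so it slows down and stops near the target instead of oscillating. DecidedDirection is referenced earlier in this thread; DecidedValues and the stop threshold are assumptions of this illustration, not Polarith's example controller, so compare against the controllers shipped in the package:

    Code (CSharp):
    ```csharp
    using Polarith.AI.Move;
    using UnityEngine;

    [RequireComponent(typeof(AIMContext))]
    public class SimpleArriveController : MonoBehaviour
    {
        public float maxSpeed = 3f;
        // Below this decided value, treat the agent as arrived (assumed tuning knob).
        public float stopThreshold = 0.05f;

        private AIMContext context;

        private void Start()
        {
            context = GetComponent<AIMContext>();
        }

        private void Update()
        {
            // Assumed member: the objective value of the made decision.
            float magnitude = context.DecidedValues.Count > 0
                ? context.DecidedValues[0]
                : 0f;

            if (magnitude <= stopThreshold)
                return; // arrived: stand completely still instead of flipping

            // Scale the velocity by the decision's objective value.
            transform.position +=
                context.DecidedDirection * (magnitude * maxSpeed * Time.deltaTime);
        }
    }
    ```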

    Happy to help,
    Martin 'Zetti' Zettwitz from Polarith.
     
  49. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    In my opinion, Polarith is designed around and inspired by vehicle steering; if you go through the forums, you'll see some discussion on flocking and the upcoming formations feature. It would be totally possible to design your zombie locomotion system and use Polarith steering to control many units on a NavMesh, but not in a plug-and-play fashion; it will take a lot of experimentation and calibration of parameters. Do spend some time working with the free version without NavMesh, look at all the examples, and make an informed decision. It's an extremely powerful tool, and very reasonably priced for the Pro version, but it is also very complex and has a lot of parameters to deal with.
     
    Polarith likes this.
  50. Artini

    Artini

    Joined:
    Jan 29, 2016
    Posts:
    181
    Very interesting package and it looks like it needs extensive studying time, to get desired results.
    I am a solo developer, so I am always searching for shortcuts, if possible.
    I have just purchased it and would like to create something useful for my future project.
    Could you please guide me in the right direction, if possible.
    I need an intro scene where different animals are walking and flying around the player,
    who moves around on the terrain. I am only looking for good and believable visuals.
    Is it something included with Polarith AI Pro, that will help me to achieve such goal?
    I am specifically looking for easily managing different animals animations.
    I am not even sure, if Polarith can help with such task at all.
    Please share your opinions.