Polarith AI (Free/Pro) | Movement, Pathfinding, Steering

Discussion in 'Assets and Asset Store' started by Polarith, Apr 18, 2017.

  1. christougher

    christougher

    Joined:
    Mar 6, 2015
    Posts:
    558
Thanks for the reply! I had already tried adding GameObjects directly with environment.Gameobjects.Add(thisGO); but got a null reference (which I mistakenly believed was on the list itself, making me think I couldn't edit the list directly). My mistake though, nooby C# skillz lol. Got it working! Thx!
     
  2. Eck0

    Eck0

    Joined:
    Jun 6, 2017
    Posts:
    48
Hello, I am very interested in buying your product, but I need to clear up some doubts:
1. Do you have avoidance between mobile agents (NPCs, players, etc.)?
2. Are the calculations very expensive? They would be made on the server.
I am looking to recreate the movement of the game Dota 2 (it has avoidance between agents). If your asset has avoidance between mobile agents, I will buy it, thanks.
     
  3. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @Eck0,

    thanks for your interest :)

Yes, we can! This is one of the core features of our product. We compute the position and possible movement directions dynamically at runtime. Furthermore, you can change behaviours, their parameters, the sensor, and the perceived environment (objects) at runtime.

Regarding performance: this depends on the setup, but in general it should be no problem. We use local visibility (a radius) to limit the solution space, i.e., we only compute what is next to the agent. Furthermore, the Pro version features load balancing/threading and other techniques that will significantly boost your performance. As shown in our feature trailer, there are many ways to optimize.

    Good news: we offer a free version and a quick start tutorial on our YouTube channel. You can create a simple setup to estimate your future effort and test the core features like local avoidance and different visibilities/radii. Additionally, we provide a demo scene inside the package containing a client-server based game with simple avoidance between enemies and obstacles.

For your final game, you would need the Pro version for path following (lanes), possibly advanced behaviours to cope with bounds, and, most importantly, performance optimization.


    Have a wonderful Christmas and happy holidays,

    Martin 'Zetti' Zettwitz from Polarith
     
  4. puzzlekings

    puzzlekings

    Joined:
    Sep 6, 2012
    Posts:
    404
    Hi Martin

    Are there any plans to add Node Canvas support?

    cheers

    Nalin
     
  5. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Not yet. Sorry.


But rumor has it that we finished merging all the 3D stuff before the new year. We just need to finish the example scenes for you, and then you'll finally get the long-awaited update. :eek:


    Martin Kirst from Polarith
     
    Korindian, christougher and one_one like this.
  6. puzzlekings

    puzzlekings

    Joined:
    Sep 6, 2012
    Posts:
    404
    Thanks Martin - good luck with the Merge!
     
    Polarith likes this.
  7. Kirshor

    Kirshor

    Joined:
    Apr 14, 2015
    Posts:
    24
    Hi @Polarith,

I purchased Polarith Pro. I don't understand the difference between Layer Normalization (Everything) in a behaviour and the Normalize checkbox in AIMContext. Help me, please.
    Thanks,
     
  8. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @Kirshor

    and thanks for your interest in our product.

I can understand the confusion since both functions are similar yet different. The normalization option used by AIMContext takes the final result of all behaviours and normalizes it.

The "Everything" option of LayerNormalization is applied directly after a behaviour, which can come in handy when chaining multiple behaviours together. For example, take three behaviours writing to the same objective. Behaviour 1 has no normalization and writes some values; they remain as they are after the behaviour is finished. Behaviour 2 uses LayerNormalization Everything. In the first step it executes the behaviour as configured, and all values are written to the objective with respect to the results of behaviour 1. As soon as behaviour 2 is finished, everything up to this point is normalized. The third behaviour does not care anymore and may break the normalization state yet again.
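To make the difference concrete, here is a rough sketch of that chaining (in Python for brevity, with made-up receptor values; `normalize` stands in for the internal normalization and is not actual Polarith code):

```python
def normalize(values):
    """Scale a list of receptor values into [0, 1] by its maximum."""
    peak = max(values)
    return values if peak == 0 else [v / peak for v in values]

# One objective sampled over four receptor directions.
objective = [0.0, 0.0, 0.0, 0.0]

# Behaviour 1: no normalization, so its raw values remain as written.
b1 = [0.2, 1.6, 0.4, 0.0]
objective = [o + v for o, v in zip(objective, b1)]

# Behaviour 2: writes its values, then "Everything" normalizes the
# whole objective accumulated so far.
b2 = [0.4, 0.0, 0.8, 0.0]
objective = normalize([o + v for o, v in zip(objective, b2)])
assert max(objective) == 1.0  # normalized after behaviour 2

# Behaviour 3: writes afterwards and may break normalization again.
b3 = [0.0, 0.5, 0.0, 0.0]
objective = [o + v for o, v in zip(objective, b3)]
print(objective)  # peak exceeds 1.0 again
```

The Normalize checkbox on AIMContext, by contrast, would correspond to a single `normalize` call over the final accumulated result.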

    Personally, I used these features to experiment with different swarming configurations.

    Franz Pieper from Polarith
     
  9. Kirshor

    Kirshor

    Joined:
    Apr 14, 2015
    Posts:
    24
    Hi @Polarith,
    Thank you for your quick support.
    I have another question:
In the SeekNavMesh scene, I tested a narrow path by baking the NavMesh with a new cube, and the agent could not move through this narrow path normally; it gets stuck or moves off the path. Please view the attached screenshot.
     
  10. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
Assuming that the agent configuration was not changed, this is the behaviour I would expect.

The reason is simple: the AIMSeekNavMesh feelers sample the NavMesh near the agent and apply the corresponding magnitude to the danger objective, which is a minimized objective using a constraint. Such a narrow scenario leads to a high danger value, which spreads over the sensor as described here. This leads to danger values which violate the objective's constraint, so the agent ignores these directions in its decision.

In other words, the constraint value can be seen as a scale for how brave the agent is. In your case, it is not brave enough to move through the narrow passage. Tuning the agent's parameters leads to a fitting solution. I also want to point out that, in a real game, an agent might face a lot of different situations, which makes it very hard or even impossible to find a fitting parameterization for all possibilities. The solution to this problem is AI states, which can be seen in our Roundabout example.

I recommend experimenting with all the parameters and with different combinations, since using context steering can be a bit confusing at first. For example, the fact that an AIMSeek can be used to avoid obstacles is indeed not obvious. I also want to point out that we have a lot of video material on YouTube. If you still have further questions, feel free to ask them here. :)

    Franz Pieper from Polarith
     
  11. Kirshor

    Kirshor

    Joined:
    Apr 14, 2015
    Posts:
    24
    Hi @Polarith,
Please tell me the difference between AIMFollow and AIMSeek.
It seems that I can replace the two components with each other without any difference.
    Thanks,
     
  12. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi again :)

    As the component reference to AIMFollow states: "The moment might come where an agent needs to focus on a single target. For instance, it needs to follow one waypoint after another, such waypoints might belong to a path (possibly found by an external global decision maker)."

    AIMSeek, on the other hand, can have multiple targets at once and is a RadiusSteeringBehaviour.

    Franz Pieper from Polarith
     
  13. Kirshor

    Kirshor

    Joined:
    Apr 14, 2015
    Posts:
    24
    I got it, thank you very much. :)
     
  14. guidoponzini

    guidoponzini

    Joined:
    Oct 4, 2015
    Posts:
    55
How is compatibility with 2018.3 going? I need to implement it in a project and would like to know if it's tested. As the last release was quite some time ago, is the package still actively supported? (From the forum, it seems to be.)
     
  15. DiscoFever

    DiscoFever

    Joined:
    Nov 16, 2014
    Posts:
    286
    That’s a GOOD rumor ! Hope it is true :)
     
  16. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @guidoponzini,

the package is still supported and compatible. However, some scripts in our DLL and example scenes are marked as obsolete by Unity. We will update the DLL's compatibility with the next update. The obsolete scripts in the package are used for the network examples and currently have no priority.
Like Martin stated, we plan to release the new update together with some bugfixes in the near future.

    Have a fresh start into the new week,
    Martin 'Zetti' Zettwitz from Polarith
     
  17. Eck0

    Eck0

    Joined:
    Jun 6, 2017
    Posts:
    48


Hello, I bought your product, but I can't get what I need. I created a scene based on the existing examples; in the video you can see how the agents step on other agents. I don't want this to happen: I need the agents not to step on other agents and to stop when they reach the point nearest to the destination.
Am I doing something wrong, or is this not possible?
     
  18. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @Eck0

it seems pretty much like the agents don't perceive each other. In general, I recommend turning on the AIMContext gizmo for visual debugging.

In your example video, you neither show the AI behaviours' second tab (globe) for perception, nor have you set up an AIMSteeringPerceiver. For each agent's behaviour (like AIMSeek), you must either set up all GameObjects that should be perceived by this behaviour, or set up environments using AIMPerceiver, as we have shown in https://www.youtube.com/watch?v=vSfYcOoKb7U.

A first and simple approach to separating the crowd of agents is to add an AIMFlee where each agent perceives the others. Don't worry, agents ignore themselves when they are also in the environment. AIMFlee needs to be applied to the interest objective with a small radius and a higher MagnitudeMultiplier than the other behaviours. That way, the agents should repel each other if they get too close.

    I hope this will help you to get started.

    Have a great Tuesday,
    Martin 'Zetti' Zettwitz from Polarith
     
  19. Eck0

    Eck0

    Joined:
    Jun 6, 2017
    Posts:
    48

Hello again, I have watched the video you mentioned. After setting everything up, I still don't have what I want: I need to stop the agents at the point closest to the target. I attach a video here.
I think Polarith should ship a scene with this behaviour already created, since it is used a lot...
Sorry for the inconvenience, but it is essential for me to be able to continue with my project.
     
  20. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
Hi again Eck0,

there are two essential approaches to stopping agents. The first is to handle the velocity based on the decided values; to decrease them near the target, you can use AIMArrive. Our example controllers have a parameter that handles velocity based on the decided values of a specific objective. There is an example scene in Lab2D.

The second approach is to handle velocity on the controller side. It is important to understand the concept behind Polarith AI: the "only" task of AIMContext is to calculate the best possible movement direction based on your environment and the sensor. What you do with this information is up to you. I have to point this out since a lot of people tend to rely on our example controllers. These are very basic and not meant to work in every possible game; that would be impossible, since every game has its unique requirements. The controller source code can be found in the package as well. So in your case, the controller could check the distance to its target and decrease the velocity based on that distance.
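As an illustration of that last point, a controller-side slowdown could look roughly like this (a Python sketch; the function `arrive_speed` and its parameters are invented for this example and are not part of the Polarith API):

```python
def arrive_speed(distance, max_speed, slow_radius, stop_radius=0.1):
    """Scale speed down linearly once the agent enters the slow radius,
    and stop entirely inside the stop radius."""
    if distance <= stop_radius:
        return 0.0
    if distance >= slow_radius:
        return max_speed
    return max_speed * (distance - stop_radius) / (slow_radius - stop_radius)

# Far away: full speed; inside the slow radius: reduced; at the target: stop.
print(arrive_speed(10.0, 5.0, 3.0))  # 5.0
print(arrive_speed(1.0, 5.0, 3.0))   # somewhere between 0 and 5
print(arrive_speed(0.05, 5.0, 3.0))  # 0.0
```

Each frame, the controller would feed the current distance to the target into such a function and move the agent along AIMContext's decided direction at the returned speed.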

    If you have further concerns feel free to ask. :)

    Franz Pieper from Polarith
     
  21. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    Hi there,
    Only starting to explore Polarith, and I must admit being fascinated. Steering behaviors have been an interest for years, and this tool is long overdue in the Unity arsenal!

    The complexity of the system is intriguing and difficult to tame at first :)
    Waiting avidly for 3D sensors to be merged, baffling possibilities.

I'm finding it tough to get a sense of how best to prioritize behaviors:
I'm using Seek, Avoid, and Wander. I was tweaking the magnitude and order of these three behaviors, hoping to get a "drifting seek" with obstacle avoidance: while seeking the green sphere, drift left and right slightly. I've come close, but with small changes of values it can happen that the wander takes over and the targets are not reached, or seek takes over and I don't get the wandering effect. Interpolation and Stabilization can help or hinder!
Is the use case of one behavior slightly modifying another possible, or is the tool better used for one behavior to take over?
Is there a rule of thumb for the effects of order vs. magnitude, without the maths blowing my mind?
    Thanks for your time.
     
  22. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @BCFEGAmes,

    thanks for the nice feedback :)

You are right, it can be quite difficult to find the perfect parameterization.

After thinking a while about your problem, I came up with three possible approaches. First, you might try using AIMFollow instead of Seek, because Follow has a constant magnitude output like AIMWander, whereas AIMSeek's depends on the proximity to your target. What I currently imagine, based on your description, is that Wander takes over if the agent is far away from the target, and vice versa.

    The second approach could be to take a step back and let another Component control the MagnitudeMultiplier of AIMWander randomly or based on a smooth function. This way, you ensure that there are times when Seek can do its job and when Wander can initiate the drifting.

The third approach would be to use a second objective for AIMWander. This objective could be minimized with a fitting constraint while Seek and Avoid are applied to the first, unlimited objective. The idea is that AIMContext always prioritizes a decision towards the target while AIMWander excludes possible solutions via the constraint. For example, the target is on the left, and so is the maximum magnitude of the first objective. Suppose AIMWander also generates magnitude to the left. Then this direction violates the constraint, and the agent needs to move forward or to the right.

I hope you'll find these ideas helpful. I think this is an interesting problem you want to solve, and I would be interested to hear which approach works best. Also, I can imagine it would help other users too. :)

    Franz Pieper from Polarith
     
  23. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    Thanks for the fast reply, a hunch was pointing towards context as the missing link, I need more time to explore it properly, and probably need to look into the starting tutorials before digging deeper.

    What you imagined for solution one is exactly what I was seeing, having two targets to reach, Seek would successfully reach the closer one with some wander, the second would be on the edge of outer radius, and the magnitude of wander would overwhelm it. It was becoming too fine grained as a viable solution, increasing radius was also adding unpredictable results.
I'm currently not ready to add external scripts directly manipulating Polarith parameters, as I haven't looked at the API extensively; I'd also need to check the distance to targets separately. The third solution seems promising, but information on the use of constraints is difficult to retrieve beyond the reference. I'm assuming it's defined in the context's objectives list with the min, norm, and unlimited check buttons, using the label of the specific behavior. All screenshots appear to indicate that only the higher-level Interest and Danger are addressed in the context, but, if I understand your suggestion, by using a specific behavior label one can apply a constraint to a specific AIM component?

    Thanks again, and best of luck with the merge.
     
  24. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    I think I need to look for more info on creating new objectives!!!

     
  25. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
Your first guess is right: I am talking about the objectives defined in AIMContext with min, norm, unlimited, and constraint. Your second guess shows that my instruction was imprecise, because it led to a misunderstanding. Thus, I want to go a bit deeper.

    AIMContext defines a so-called multi-criteria optimization (MCO) problem and solves this problem in every update step. An MCO has n objectives which may or may not contradict each other. Thus, they are hard to solve, and there might be multiple solutions of equal quality. The sensor defines the problem space. Every receptor (direction) is a possible solution.

The AIMBehaviours' task is to sample the environment for each receptor and write to their respective TargetObjective. All behaviours act with respect to their specific context: you cannot have a behaviour on GameObject A writing to the context of GameObject B. Furthermore, you cannot have specific constraints for single AIMBehaviours, since objectives are a global concept within a GameObject. As soon as all behaviours have finished writing to their respective objectives, AIMContext has the aggregated data of all behaviours, which is the MCO problem for the current time step.

Now AIMContext solves the problem with these objective values using the epsilon-constraint method. The unlimited objective is the one we want to optimize (min or max) under the given constraints of the other objectives. For example, say we have an interest (max) and a danger (min) objective, and danger has a constraint of 0.5. AIMContext iterates over all receptors and looks at the danger value: if it is greater than 0.5, this receptor (direction) is omitted and cannot be the solution; if it is less than 0.5, the receptor is a possible solution. Of course, we take the maximum over the possible solutions.
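That decision step can be sketched in a few lines (Python for brevity; the receptor values and the function name `decide` are made up for illustration, this is not the actual solver code):

```python
def decide(interest, danger, danger_constraint=0.5):
    """Pick the receptor index with maximal interest among those whose
    danger value does not violate the constraint; None if all violate."""
    feasible = [i for i, d in enumerate(danger) if d <= danger_constraint]
    if not feasible:
        return None
    return max(feasible, key=lambda i: interest[i])

# Eight receptor directions with sampled objective values.
interest = [0.1, 0.9, 0.6, 0.2, 0.0, 0.4, 0.3, 0.0]
danger   = [0.0, 0.8, 0.2, 0.1, 0.0, 0.6, 0.4, 0.0]

# Receptor 1 has the highest interest but violates danger <= 0.5,
# so the decision falls back to receptor 2.
print(decide(interest, danger))  # 2
```

Returning `None` here corresponds to the case where every direction violates its constraint, i.e., the agent is "not brave enough" to pick any of them.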

    I hope this insight proves helpful. You may also have a look at this manual page if you missed it. :)

    Franz Pieper from Polarith
     
  26. Kirshor

    Kirshor

    Joined:
    Apr 14, 2015
    Posts:
    24
    Hi @Polarith,
How do I get all percepts within the outer (and inner) radius of an AIMSeek via an available Polarith method, without recalculating? Also, I would like a method to discover, as an event, when AIMSeek is processing and when it is not (no object near the AIMSeek).
Could you give me an example of how to use the methods IsNearBounds and GetBoundsSqrDistance?
    Thanks,
     
    Last edited: Jan 12, 2019
  27. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @Kirshor,
Unfortunately, you won't get this out of the box. You need to write your own customized version of AIMSeek and Seek and, for Inspector usability, an appropriate editor; otherwise you would need the source code version. But don't worry, it's not that hard for your needs :)
    We provide a section in our manual for this.
Additionally, we made some of the free classes publicly available on GitHub to support you guys in writing your own behaviours. I highly suggest having a look at the manual, AIMSeek, and Seek before you start coding.

What you need to do:
1. Create a script (MySeek) and inherit from Seek.
2. Add a List<SteeringPercept> or something similar to store the active percepts.
3. Override StartSteering(). Call the base class' method and check the result: if true, the percept is in range (you may add it to the list). Return the result (bool). If your list is empty, there is no percept in range.
4. To reset the list after each step, override Behave(), clear the list, and call the base class' method.
5. Finally, create a script (MySeekComponent) for the front-end class. Copy the code from AIMSeek and rename the class, its usages, and the used back-end class. Voila. Now you can access the list via MySeekComponent.MySeek.

    I am confident you will master this task :)

    Rock your weekend,
    Martin 'Zetti' Zettwitz from Polarith
     
    Last edited: Jan 12, 2019
  28. Kirshor

    Kirshor

    Joined:
    Apr 14, 2015
    Posts:
    24
Hi @Polarith,
I see nothing in the scene when using the Planar Shaper, and I could not customize it. Help me please, thanks. (Screenshot attached.)
Unity version: 2018.3.0f2
     
    Last edited: Jan 13, 2019
  29. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi again,

    that's odd. I tested with the same version by just opening the SensorLab scene in the package which displays the handles as expected. Manipulation works as well.

    Is it possible that the selected sensor has no receptors at all? Then, of course, nothing is displayed. You may also click on Create Sensor in the inspector as seen in your image.

    If this does not work, you might try to check the SensorLab scene with a clean package just to make sure that it works at all. If not we'll have to figure out something else. :)

    Franz Pieper from Polarith
     
  30. Kirshor

    Kirshor

    Joined:
    Apr 14, 2015
    Posts:
    24
It works today with the same sensor I created yesterday; I just opened Unity and saw the sensor. :) I think this bug is from Unity (just restart Unity to solve it). Now everything works very well. Thank you very much.
     
  31. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    Hi Again, some progress today!
Managed to create an extra objective, called wandering, and have the wander behavior act on this objective, which has a min constraint working nicely.
So as the agent seeks, it's also affected by the random wander behavior, drifting towards the target like a drunken driver, while actively avoiding objects, plus a danger objective for passive avoidance for good measure!

    There are two new things that I'm unable to solve:
• Trying to use the wandering objective for "Objective as Speed" on the physics controller makes the agent move very erratically (or not move at all!). It would appear that wander doesn't in effect have much of a forward velocity, being mainly about rotation.
• Applying stabilization to the wander objective is fine, but some erratic behavior occurs if I apply it to interest: it looks like wander comes through from the outset, but heavily skewed to one side.
What I'm basically trying to achieve now is for the wander behavior to come through when no seeking is happening.
I think I'd probably need a state machine with Wander as the unlimited behavior if no seek is happening, and a switch to the new context when a seek target is identified, plus "drifting".
    Please let me know if anyone on your side has a moment to take a look at the setup, out of curiosity!

Last surprise: the wander parameters for angle and time seem to become almost irrelevant once a separate objective is used. The constraint in the context becomes the dominant parameter... Will upload a video soon.
     
    Last edited: Jan 14, 2019
  32. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36


Video showing drifting seek: the fact that a constrained objective only works with respect to the unlimited objective is a difficult concept to get one's head around, as powerful as it is.
In the 3D Deadlock example, if the object has no interest target to seek, the passive avoidance using seek and the danger objective has no effect! I assume Avoid would be used in that case.
     
  33. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
Hi @BCFEGAmes!
    Nice to see the first outcome.
Objective as Speed uses the magnitude from the decided direction, which is based on the objective that you want to maximize ('unlimited'). Since your objective 'wander' is a constraint on interest, it is pure luck whether the decided direction is aligned with a (high) value in the wander objective; if not, the agent won't move.

Pretty interesting, but it makes sense. Have a look at AIMStabilization; as the almighty manual says: "a magnitude is added where corresponding receptor directions match the (local) DecidedDirection." So stabilization on wander should not have a significant influence, while on interest it will keep the agent more stable. Since the values of AIMWander are fixed for the time interval, the decided direction based on interest will be kept while wander constrains it (from the single side where wander is present), so it looks a little bouncy.
You can use a state machine, but if you are happy with your current behaviour, you can simply add another AIMWander attached to interest. Expand the menu, set ValueWriting to Addition and the MagnitudeMultiplier to a small value, e.g. 0.1. Now you have a little magnitude for wandering around that only has a significant influence while no seek (or other, stronger behaviour on interest) is active.
The constraint is always the most important part for every objective that is not maximized, since it's the last instance to be checked after value mapping and writing. In your particular setup, this is based on the fact that you have AIMStabilization on wander, so there is always a value in the direction of movement ;) Try it without AIMStabilization and see the impact.
Correct, there is no effect of constraints if there is nothing for them to affect, so in fact you need a behaviour writing to interest (like AIMAvoid, or AIMWander as I suggested above).
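The additive trick above can be illustrated in a few lines (a Python sketch with invented receptor values; this is not the actual ValueWriting implementation):

```python
def write_additive(objective, behaviour_values, magnitude_multiplier):
    """Add a behaviour's scaled receptor values onto an objective,
    as a ValueWriting mode of 'Addition' would."""
    return [o + magnitude_multiplier * v
            for o, v in zip(objective, behaviour_values)]

# Interest as written by a seek behaviour; wander adds a faint bias on top.
interest = [0.0, 1.0, 0.3, 0.0]
wander   = [0.0, 0.0, 1.0, 0.0]

interest = write_additive(interest, wander, 0.1)
# Seek still dominates: receptor 1 keeps the highest value.
print(interest.index(max(interest)))  # 1

# With no seek active, the same small wander term is all there is,
# so it decides the direction on its own.
idle = write_additive([0.0, 0.0, 0.0, 0.0], wander, 0.1)
print(idle.index(max(idle)))  # 2
```

The small multiplier is what keeps the wander term subordinate while seek is active, yet decisive when interest would otherwise be empty.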

    Happy to help.
    Martin 'Zetti' Zettwitz from Polarith
     
  34. Eck0

    Eck0

    Joined:
    Jun 6, 2017
    Posts:
    48
Hello again, I have been trying different approaches, and honestly I think this asset is not ready to handle troops; there are many problems when placing a troop on a specific target. You should provide a project with this example if this really is possible. If you want, I can send you my project so that you can fix it and include it as an example...
     
  35. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hello there,

Can you specify which problems exactly occurred? The problems we 'found' in your video are that the agents won't stop at the target and crash into each other (like you described before).

Like Franz said, you need AIMArrive to reduce 'Interest' when arriving at the target and use a controller whose speed depends on the magnitude of 'Interest' (like our example controllers do).

To prevent agents from crashing into each other, or rather to keep their distance, you need avoidance like a passive AIMSeek, as shown in many of our examples. AIMSeek perceives the other agents and writes values to 'Danger'. Lastly, you need to fine-tune parameters like the constraint in AIMContext and the radii of AIMSeek. I've created a simple video just to check whether we are talking about the same thing. Note that the wobbling of the agents is caused by the rudimentary controller and a setup that wasn't well parametrized at all.

    group arrive2.gif


Hope this helps,
    Martin 'Zetti' Zettwitz from Polarith
     
    hippobob likes this.
  36. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
Haven't had this much fun with AI in a long time; looking forward to getting my hands on Pro, hopefully just in time for 3D sensors!

On that note, it was quite the surprise to find out that planar sensors can handle tilted planes no problem!


Here's my "drifting" agent wandering, successfully clearing a landscape of interest targets while avoiding danger objects via a separate constrained objective and avoid, all with a drifting objective constraint making it swivel!
     
  37. Mohamed-Anis

    Mohamed-Anis

    Joined:
    Jun 14, 2013
    Posts:
    94
    @Polarith is there an example of this working in 3d space? Thank you!
     
  38. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    @BCFEGAmes,

    we are curious about your results, but unfortunately, your video link seems to be broken.

    @Mohamed-Anis ,

Not yet; we are still working on the release of the 3D upgrade, if you are thinking of a 3D scene like a space shooter. Otherwise, if you mean a 3D game that takes place on a ground plane (like many RPGs, shooters, and racing games), it is possible, and there are examples in the package.

    Have a great day,
    Martin 'Zetti' Zettwitz from Polarith
     
    Last edited: Jan 22, 2019
    Mohamed-Anis likes this.
  39. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    Fixed! I think...
     
  40. Mohamed-Anis

    Mohamed-Anis

    Joined:
    Jun 14, 2013
    Posts:
    94
    Hey @Polarith thanks!

The package looks really good, with a well-thought-out API! But yeah... I'm looking to use it for non-grounded purposes :(. I'm still on the free version and will take another look when the 3D version is released!
     
  41. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    Hi again,
    Still learning the subtleties of Polarith, I have found a 3D pathfinding solution that I'm hoping to integrate once I get pro, and spherical sensors!

    A quick question on wander behavior, I noticed a peculiar thing when I try the simple Lab 2D scene:
I was trying to visualize the interest objective of wander, so I turned off the Physics Controller 2D to have a static agent. I then noticed that upon pressing start, both the green receptors and the yellow solution gizmo would jump around erratically, at about 30 degrees to the right, irrespective of the wander parameters Time and Angle Deviation. If I turn the physics controller on and then off again, the behavior starts acting as expected.

I'm asking because this is similar to what I get with my drifting behavior, where the deviation parameters are ignored. I hope my explanation is understandable.
     
  42. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
It would appear that setting the speed to 0 before pressing play has the same result.
     
  43. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @BCFEGAmes,

    you already got it. :)

That's caused by the internals of AIMWander, since it does not compute a movement direction like the other behaviours do. AIMWander needs at least an initial movement direction to apply the wander characteristics (angle deviation).

    Have a great day,
    Martin 'Zetti' Zettwitz from Polarith
     
  44. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    Thanks for your input Martin!

Using the Sensitivity Offset and the constraint value on my "drifting" objective to calibrate the range of drifting works really well! I don't need Wander as such, as the context solver does a great job of adding alternating left/right momentum from any correctly aligned behavior. The picture below shows two Follow behaviors on the same target as two separate objectives, one with a constraint of 0.2.

    Drifting.jpg

I don't mean to hog the board with my drifting thing, but as I'm getting a sense of how constraint objectives work, I can imagine the case for a few simple behaviors, similar to the processing behaviors, targeted specifically at constraints.

A bit like an LFO on a synth, where the frequency of the LFO is used to modulate other parameters. I believe that is somewhat the case already, but at present the fine control of the frequency of such modulation is automated by the solver, and changing the context update frequency messes up other behaviors. Direct control of waveform, amplitude, and frequency would allow things like fish swimming left and right while pursuing, or discrete lunges forward, and more.
    This might be more related to locomotion solutions as opposed to steering but felt compelled to share!
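    To illustrate the LFO analogy in framework-independent terms (this is just the underlying math, not Polarith API; the function and parameter names are made up): a periodic offset angle layered on top of a base steering heading gives exactly the waveform/amplitude/frequency control described above.

```python
import math

def lfo_offset_angle(t, amplitude_deg, frequency_hz, waveform="sine"):
    """Return a steering offset angle (degrees) at time t, like a synth LFO."""
    phase = 2.0 * math.pi * frequency_hz * t
    if waveform == "sine":
        return amplitude_deg * math.sin(phase)
    if waveform == "square":
        return amplitude_deg if math.sin(phase) >= 0.0 else -amplitude_deg
    raise ValueError("unknown waveform")

# Modulate a pursuit heading of 90 degrees with a 2 Hz, +/-15 degree wobble.
heading = 90.0 + lfo_offset_angle(t=0.125, amplitude_deg=15.0, frequency_hz=2.0)
```

    Feeding such an offset into the final heading each frame produces the fish-like left/right swimming; a square waveform instead of sine gives the discrete lunges.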
     
  45. captnhanky

    captnhanky

    Joined:
    Dec 31, 2015
    Posts:
    97
    Hi!

    I am a beginner in Unity and can barely code. Anyway, I managed to get my game environment (a 2D side-scrolling shooter) working, including an endless scrolling routine.
    What I am missing is proper enemy behaviour.

    In my humble opinion, the best 2D enemy behaviour in an action shooter ever created is from the Williams Defender arcade machine.

    My question is: is your Polarith AI framework capable of implementing this?

    Basically, there are 6 different behaviours in the Williams Defender arcade game:

    1. Bomber: only flies around and leaves bombs in its wake.
    2. Lander: the most interesting behaviour. It appears out of thin air, flies mostly near the ground and shoots (imprecisely) at the player. BUT at a random moment it suddenly captures a human and carries it up towards the sky. In that phase the player can shoot the Lander and rescue the human in mid-air, but if the Lander reaches the top of the screen with the captured human, it becomes a Mutant.
    3. Mutant: the most aggressive enemy. It flies directly at the player (in a little zig-zag to dodge the player's fire) and tries to collide with them while shooting at the same time.
    4. Pod: does nothing, but when hit it releases 4 to 5 Swarmers.
    5. Swarmer: part of an enemy group (the swarm) that can only move together in one direction; they can change direction, but only long after they have passed the player.
    6. Baiter: this guy is very nasty because it appears out of thin air when you are not fast enough in clearing the level. It is not part of the level; it is only there to terrorize you. It is faster than you and you cannot escape, so you have to shoot it. It moves in a zig-zag with impulse-like direction changes, and there is no limit to how many of them appear.

    Not to forget that the enemies calculate your position while you are moving and fire in front of you to hit you (this alone is fantastic).
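    That "fire in front of you" trick is classic lead-target prediction: assuming the target keeps its current velocity, solve for the interception time and aim there. A minimal sketch of the math (plain geometry, independent of any framework):

```python
import math

def lead_target(shooter, target, target_vel, bullet_speed):
    """Predict a 2D aim point for a constant-velocity target, or None if
    the bullet can never intercept it."""
    rx, ry = target[0] - shooter[0], target[1] - shooter[1]
    vx, vy = target_vel
    # Solve |r + v*t| = bullet_speed * t for the smallest positive t.
    a = vx * vx + vy * vy - bullet_speed * bullet_speed
    b = 2.0 * (rx * vx + ry * vy)
    c = rx * rx + ry * ry
    if abs(a) < 1e-9:  # target speed equals bullet speed: linear case
        t = -c / b if abs(b) > 1e-9 else -1.0
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None
        root = math.sqrt(disc)
        candidates = [x for x in ((-b - root) / (2 * a), (-b + root) / (2 * a))
                      if x > 0.0]
        t = min(candidates) if candidates else -1.0
    if t <= 0.0:
        return None
    return (target[0] + vx * t, target[1] + vy * t)
```

    For example, a target at (10, 0) moving straight up at speed 3, shot at with bullet speed 5 from the origin, is intercepted after 2.5 time units at (10, 7.5).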

    All this has to be lightning fast. It has to run on a mobile phone with at most Android KitKat on it.

    I'll post a YouTube video to illustrate this special behaviour.
    If implementing these behaviours (or at least most of them) is possible, then I would be happy to buy your Pro version, if it is needed.
    If all this is not possible with your framework, maybe you could point me in the right direction?

    Thanks for attention!

     
    Last edited: Feb 2, 2019
  46. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @BCFEGAmes,

    nice to see you found a better solution. Additionally, you can play around with the sensitivity offset of the second Seek.
    I like your idea, but I have concerns about realising it, since one would need to add values to the solution that could violate constraints; and if they never violated them, the movement wouldn't show this sine-like trajectory in the presence of a danger obstacle that affects the movement. I think this is something one should handle directly inside the controller, since it affects the movement and not the decision for a certain direction. The things our AI takes care of are more about 'how the agent behaves (to find its target direction)' and not how it moves (in the sense of physics).

    Have an awesome week!
    Martin 'Zetti' Zettwitz from Polarith
     
  47. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @captnhanky,

    our product is about multi-criteria optimisation, i.e. having multiple competing objectives that are optimised at once to naturally decide on a certain movement direction. Your game sounds like a direct movement solution would fit better, since most of the situations you describe are based on state machines and simple vector mathematics (like leading shots or flying zig-zag).

    Nevertheless, most of the things you described should be possible, like chasing (and additionally avoiding other enemies). Things like zig-zag would best be handled inside the controller, which would require (simple) programming skills, but you can try to use the solution presented in post#344.

    You can try our free version to test whether this fits your needs; at least for the basics it should.

    Happy exploring,
    Martin 'Zetti' Zettwitz from Polarith
     
    Mohamed-Anis likes this.
  48. captnhanky

    captnhanky

    Joined:
    Dec 31, 2015
    Posts:
    97
    Thanks for clarifying!
     
    Polarith likes this.
  49. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    3D Patch Incoming (for real)



    Hey friends!

    Just letting you know that we are in the final steps of completing our long-awaited 3D patch! And guess what: we will take you along on our journey over at Twitter! Enjoy some insights into our process. There are already some goodies waiting for you, and there's plenty more to come. Let's chat about 3D on our profile - see you there!

    https://twitter.com/polarith

    Oh and here's the new 3D sensor by the way.



    Martin Kirst from Polarith
     
  50. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    A Massive Congratulations on the impending 3D steering launch!
    Exciting times for games AI.

    Quick question if I may. I guess it's going to be part of the upcoming formations, but in keeping with the original flocking algorithm, is there a suggested way of calculating the local center of a cluster of interest targets, to get a dynamic cohesion behavior? Using addition and multiple targets gets close, but eventually Seek commits to one target; building a procedural collider for a group and then getting its center seems a little overkill.
    Are there plans for such a canonical behavior, or do you suggest alternatives?
    Once more, stoked by the 3D spherical sensors - looking forward to playing with them ;)
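    For reference, the cohesion center in the original Reynolds-style flocking algorithm is simply the centroid of the neighbours inside the sensing radius, no collider required. A framework-independent sketch (the function name is illustrative, not Polarith API):

```python
def cohesion_center(agent_pos, targets, radius):
    """Centroid of all 2D target positions within `radius` of the agent,
    or None if no target is in range."""
    r2 = radius * radius
    near = [p for p in targets
            if (p[0] - agent_pos[0]) ** 2 + (p[1] - agent_pos[1]) ** 2 <= r2]
    if not near:
        return None
    n = len(near)
    return (sum(p[0] for p in near) / n, sum(p[1] for p in near) / n)
```

    Recomputing this centroid each frame and feeding it to a single Seek-style objective gives a dynamic cohesion target that follows the cluster instead of committing to one member.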