Polarith AI (Free/Pro) | Movement, Pathfinding, Steering

Discussion in 'Assets and Asset Store' started by Polarith, Apr 18, 2017.

  1. jmacgill

    jmacgill

    Joined:
    Jun 17, 2014
    Posts:
    17
    Oooh the manual page animations have been updated with 3d sensors and movement :) Must be getting close...
     
    Novack likes this.
  2. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
  3. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Let me just leave this here.

     
    one_one, Korindian and Novack like this.
  4. Novack

    Novack

    Joined:
    Oct 28, 2009
    Posts:
    844
    Great work fellas, looking forward to it!
    And very nice video; it speaks of hours upon hours of rehearsal work.
     
  5. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    As promised, we have submitted our update to the Asset Store. Now it is up to Unity to approve and publish it. So play the waiting game with us!

    20190910-IMG_8022.JPG
     
    NS24 likes this.
  6. NS24

    NS24

    Joined:
    Feb 21, 2018
    Posts:
    15
    That's how I look :D
    Been checking every day.

    Meanwhile, are there any updated docs/web pages, or are you waiting for this to appear on the UAS first?
     
  7. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @NS24,

    The updated docs are already online. As some of you might have noticed, there is a little 1.7 version tag. You are welcome to discover the new features of the package and the behaviours in the docs :)

    Happy exploring,
    Martin 'Zetti' Zettwitz from Polarith
     
  8. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    3d-patch-title.png

    Hi!

    Just letting you know that the thing is online. We wish you loads of fun with the new features!

    Here is the trailer to convince the unconvinced :)

     
    Korindian, Vincent454 and one_one like this.
  9. jmacgill

    jmacgill

    Joined:
    Jun 17, 2014
    Posts:
    17
    Ooooh I was going to ask you to integrate a PID controller... and you already did it in the release examples!
    (Was messing about with PID controllers for spaceships in prep for 1.7)
     
    Polarith likes this.
  10. jmacgill

    jmacgill

    Joined:
    Jun 17, 2014
    Posts:
    17
    Loving the update - your code is, I have to say, very well commented - it makes a huge difference.

    Quick question... does AIMReduction work in 3D? I'm having trouble getting it to work :(

    [edit] - to answer my own question... the issue was with SpaceshipController: the DecidedMagnitude always seemed to be 1.0. Changing the force calculation to the following worked for my use case:

    force = new Vector3(0.0f, 0.0f, this.Context.DecidedValues[0 /* interest */] * thrustMultiplier);
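
    For anyone hitting the same thing, here is roughly how that line sits in my controller. This is only a sketch of my setup, not Polarith's actual SpaceshipController; the AIMContext reference, the namespace import, the turn handling, and the field names are my own assumptions (only the DecidedValues access and thrustMultiplier come from the snippet above).

    using Polarith.AI.Move; // assumed namespace of AIMContext
    using UnityEngine;

    // Minimal physics controller sketch: turn towards the decided direction and
    // scale the forward thrust by the decided interest value.
    [RequireComponent(typeof(Rigidbody))]
    public class SimpleThrustController : MonoBehaviour
    {
        public AIMContext aimContext;        // the agent's AIMContext component
        public float thrustMultiplier = 10f; // illustrative value
        public float turnSpeed = 2f;         // illustrative value

        private Rigidbody body;

        private void Start()
        {
            body = GetComponent<Rigidbody>();
        }

        private void FixedUpdate()
        {
            var context = aimContext.Context;             // assumed: backend context exposed by AIMContext
            Vector3 direction = context.DecidedDirection; // assumed property

            // Rotate smoothly towards the decided direction.
            if (direction.sqrMagnitude > 0.0001f)
            {
                Quaternion target = Quaternion.LookRotation(direction);
                body.MoveRotation(Quaternion.Slerp(body.rotation, target, turnSpeed * Time.fixedDeltaTime));
            }

            // Scale forward thrust by the decided interest value instead of DecidedMagnitude.
            Vector3 force = new Vector3(0f, 0f, context.DecidedValues[0 /* interest */] * thrustMultiplier);
            body.AddRelativeForce(force);
        }
    }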
     
    Last edited: Sep 16, 2019
  11. jmacgill

    jmacgill

    Joined:
    Jun 17, 2014
    Posts:
    17
    In the documentation for Reduction you leave a tantalizing comment "For instance, when it is used together with Align or Adjust, you can create decent swarming behaviours for your agents. (We will definitely write a blog post about this.)"

    Please do so!
     
  12. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    20190925-IMG_8429.JPG

    Hi @jmacgill,

    Fair shout! We've been neglecting our blog for some time, but it's at the top of our to-do list. In the meantime, check out the boid scene in our package - it could solve some of the issues you might have with this topic!

    Happy coding,
    Franz from Polarith
     
    jmacgill likes this.
  13. puzzlekings

    puzzlekings

    Joined:
    Sep 6, 2012
    Posts:
    404
    Hi,

    I am trying to modify one of your samples to get an agent to navigate a maze without using a NavMesh.

    My understanding is that, using PlanarSeekBounds, an agent should be able to use a planar sensor to detect a Mesh Collider. However, both agents in the picture (one with Physics and one without) go straight through the orange maze to the green dot.

    I have set up the Maze/Mesh Collider as an Environment in the Steering Perceiver, and used PlanarSeekBounds with the Visual Bounds Type.

    I must be doing something wrong, or maybe this is not possible. I have DM'd you a copy of the scene so I hope you might be able to look at it.

    cheers

    Nalin

    Screenshot 2019-10-07 at 09.45.34.png
     
  14. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi, @puzzlekings!

    Unfortunately, this is not how our bounds behaviours work. For the other readers: the scene he sent us consists of a single big mesh with the occluders (which you can see in the image above) inside, so the occluders are subparts of one big mesh. As you can read in the docs, even with visual bounds we process the bounds like the OOB version. Since bounds are just cuboids (center and extents), there is no way of using them as concave (or even convex) meshes, but only as bounding boxes.

    You need to set them up separately for each object (or at least for each bounding box) you want to avoid. We would love to obtain a more detailed model, but that's not how Unity's bounds work. Additionally, I have to mention that a highly detailed model with high-precision avoidance would be extremely costly in terms of computation power, since many raycasts would be necessary to sample the surface.

    Sorry for the bad news,
    Martin 'Zetti' Zettwitz from Polarith
     
  15. puzzlekings

    puzzlekings

    Joined:
    Sep 6, 2012
    Posts:
    404
    Ok thanks @Polarith - kind of what I thought might be the case :(

    Is this still the case with the 3D Spatial Sensor?
     
  16. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    Hi there,
    Continuing the above conversation: am I correct that a "containment" type behavior for a procedurally generated cave with a whole-mesh, non-convex collider would not work using bounds? Would such a collider be usable by your system at all, or would one need to raycast manually and implement turning at a different level?
    Thanks,
    Sergio.
     
  17. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi, @puzzlekings!

    It is the same, since bounds in Unity work the same way in both 2D and 3D. Since your scene seems to be 2D-based, maybe you can put an invisible NavMesh on the ground and use Seek/Flee NavMesh instead?


    Hi @BCFEGAmes,

    edit: fixed the name reference

    Nice to hear from you. How is your research going? :)
    If your cave consists of multiple colliders that approximate the surface of the cave, it would be possible. If you have a single mesh (collider), you are totally right: a raycast system would do the job. Well, I've already coded a little script that's similar :) Have a look at SeekGround in the Pro package. There, I've added an invisible GameObject to generate objective values with the attached behaviour. You would need to alter it to perform raycasts in the direction of each receptor of the sensor. Alternatively, you can write directly into the underlying backend behaviour to get rid of the GameObjects, but it's way more complicated.
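
    To make that more concrete, here is a rough sketch of the marker idea (it is not the actual SeekGround code; the fixed probe directions, field names, and marker setup are placeholders, and a proper version would sample the sensor's receptor directions instead):

    using UnityEngine;

    // Sketch: raycast outwards in a few fixed directions and place invisible marker
    // objects at the hit points. A danger behaviour (e.g. AIMAvoid or AIMFlee) whose
    // environment contains these markers then generates objective values for the walls.
    public class CaveWallProbes : MonoBehaviour
    {
        public Transform[] markers;      // invisible child objects, one per probe direction
        public float probeLength = 10f;  // illustrative value
        public LayerMask caveLayer;      // layer of the single cave mesh collider

        private void Update()
        {
            for (int i = 0; i < markers.Length; i++)
            {
                // Evenly spaced directions around the agent's up axis.
                float angle = 360f * i / markers.Length;
                Vector3 direction = Quaternion.AngleAxis(angle, transform.up) * transform.forward;

                if (Physics.Raycast(transform.position, direction, out RaycastHit hit, probeLength, caveLayer))
                {
                    markers[i].gameObject.SetActive(true);
                    markers[i].position = hit.point;        // marker sits on the cave wall
                }
                else
                {
                    markers[i].gameObject.SetActive(false); // nothing to avoid in this direction
                }
            }
        }
    }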

    A hint for everyone: note that you must set the Bounds Type to OOB if you want to perceive a collider. I think people are often a little confused: since we use the colliders for AAB and OBB, they expect Visual to use them as well. Note that Visual only needs the mesh/sprite renderer (that's why it does not process the mesh collider at all). So be vigilant, guys :)

    Have an awesome weekend,
    Martin 'Zetti' Zettwitz from Polarith
     
    Last edited: Oct 12, 2019
  18. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    Thanks for the detailed reply, as usual! All is well in research land, but slow! Looking forward to experimenting some more!!!
     
  19. puzzlekings

    puzzlekings

    Joined:
    Sep 6, 2012
    Posts:
    404
    @Polarith

    I think there could be a memory leak related to the SeekNavMesh component.

    If you look at this video, when I enable the AgenSeekNavMeshSimple object you can see that the Mesh count in the profiler starts to rocket. I also noticed that when I then stop playback, it takes a while for it to stop, i.e. I get the spinner icon on the Mac - sometimes I need to shut Unity down. I have reproduced this in several scenes, including a NavMesh version of the scene I posted previously.

    https://www.dropbox.com/s/8pczzlyb4yda6m9/NavmeshLeak.mov?dl=0

    After a bit more digging, this seems to go up when I'm looking in the Scene view, so maybe it's something to do with the debug visualisers?

    I would appreciate it if you could investigate - I have DM'd you this scene.

    Separately, I could use some advice on the best way for an agent to steer using a NavMesh to hit a series of waypoints. I have been experimenting with physics/non-physics controllers to find the best way (in this scene), and have also explored various components, but the process so far has not been smooth.

    I have tried using SeekNavMesh on its own with a filtered environment for the interest, but it does not seem to do a very good job and the agent struggled to get out of the starting area. That's when I thought the waypoints might be an idea; however, I'm confused about how this is meant to work with the SeekNavMesh component.

    Any advice or suggestions would be great :)
     
    Last edited: Oct 21, 2019
  20. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi, @puzzlekings!

    We are going to investigate the leak. My guess is the Gizmos, too.

    In general, I would recommend a physics-based controller since it avoids the sharp turns that the simple controller makes. There are two settings to adjust. First, of course, the properties of AIMSeekNavMesh: radius and radius mapping type. Second, the TargetRadius of AIMFollowWaypoints. A waypoint is marked as visited once the agent enters this radius. Thus, a greater radius results in smoother trajectories since the agent does not need to aim for the exact point, or at least is not forced to reach it.

    Both components work as usual. FollowWaypoints is nothing more than a simple Follow behaviour with the next (ordered) waypoint as its target; thus, it should write to interest. SeekNavMesh is like the common Seek: it is a RadiusSteeringBehaviour and therefore works exactly like Seek. The only difference is that it computes the objective values with respect to its NavMesh feelers, so it is not looking for objects but for the NavMesh boundary. Hence, it should write to danger (for avoidance).
    The two components work together just like any other combination of behaviours.

    Besides the waypoints, you could use Seek (on interest) instead if you are looking for collectables. This way, you are not limited to the order of the waypoints. Keep in mind that the seek radius must be big enough to have all targets in range.

    Have a great Wednesday,
    Martin 'Zetti' Zettwitz from Polarith
     
  21. puzzlekings

    puzzlekings

    Joined:
    Sep 6, 2012
    Posts:
    404
    Thanks Martin!

    In the scene I sent you, I tried to get a physics controller working which uses AIMSeek and AIMSeekNavMesh to find a way through to those green collectables. However, if I put the Agent SeekNavMeshPhysics in the top left (starting area) of the map, it seems to get stuck and never leaves the starting area.

    I must be doing something wrong, so if you get time, I would appreciate it if you could check and see.

    cheers

    Nalin
     
  22. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @puzzlekings,

    The interest was not attached to the steering perceiver (environment was not set, therefore the outline was set to interest :D), so the agent was not able to perceive the targets. In case the interest is hidden behind large or concave obstacles, a path is preferred since your setup does not use separated colliders. The problem is that the agent must move in the opposite direction of the interest target. You can try a workaround for these concave parts by placing a game object containing a steering tag with a radius there. Thus, the agent will steer around it, and the object can stay concave.
    I've also looked at your path follow agent. It looks good in general, but the starting point was set to index 5, which caused problems due to positioning.

    I have investigated the problem, and I've got bad news but also good news for you. Bad news first: we cannot fix it. Well, we could, but then you would have everything in your build, which we think is even worse. The problem, as you have guessed, is caused by the debug visualization. We use Unity's sphere gizmos, and they cause extreme memory leakage. Today I came across the same problem while generating graphics for my research. It seems Unity does not care about its gizmo garbage: I was wondering about 6 GB of memory consumption caused by Unity, and even after closing Unity, the memory was not freed.
    Now to the good part: it's not in the final build at all. If you want to profile the memory consumption of your project, just disable the gizmos (this should be done in general while profiling). The gizmos should be handled with care and only activated when needed.

    Detective greetings,
    Martin 'Zetti' Zettwitz from Polarith
     
  23. puzzlekings

    puzzlekings

    Joined:
    Sep 6, 2012
    Posts:
    404
    The interest was not attached to the steering perceiver (environment was not set, therefore the outline was set to interest :D)

    Hmmm - not quite sure what you mean, as this is set as per the attached image, i.e. Environment Interest is a child of Steering Perceiver, and it is set up to use the Interest layer, on which all interest objects have been set.

    EnvironmentInterest.png

    ...what have I missed exactly?

    a path is preferred since your setup does not use separated colliders.

    That's kind of what I thought, so I'm thinking I'll probably use the UnityPathFinding component.

    ... in which case I'm wondering whether this should be used in conjunction with FleeNavMesh (to detect the edges) or SeekNavMesh.
     
  24. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @puzzlekings,

    Sorry, I made a mistake: I meant the layer instead of the environment. The layer had not been set on the interest game objects but on the outlines. Maybe this is an error from importing the scene; in any case, it took me a while to track down the problem.

    forum-layer.png

    You can use the path as the global direction and SeekNavMesh for local avoidance: the path may be close to edges, and SeekNavMesh will prevent the agent from moving too close to them. Additionally, the TargetRadius in FollowWaypoints can be increased so that the agent does not need to pass exactly through the waypoint's center.

    Have a nice start into the week,
    Martin 'Zetti' Zettwitz from Polarith
     
  25. nicholasboshoff

    nicholasboshoff

    Joined:
    Oct 24, 2019
    Posts:
    3
    Hi Polarith

    Excellent work!

    I really hope this will work for what I am working on: a 3D tower defense. The pathfinding should still work as in 2D, since everything is on a flat surface, with units moving from the spawning location to waypoints in order (for example waypoint 1, then waypoint 2) before they can go to the end.

    For the most part it works like a standard tower defense, except that players are able to build a maze around the waypoints for the waves to run through towards each waypoint before moving on to the next one. Nothing new in the concept, but the curveball is that during runtime the player has the option to juggle the wave by opening and closing parts of the maze, which has to change the path the wave runs along.

    What needs to happen when the player juggles is that the wave immediately changes its path, but turns naturally and only then starts to move along the newly found path. Each unit should also account for other units of the wave in its path, not treating them as obstacles but more like something it has to follow until a faster route opens up (the units' speeds might differ; for instance, the one at the back might have a speed buff and will continuously try to overtake the slower units). Standard TDs normally let units move through each other, so that scenario won't be a problem then.

    I have 2 concerns:

    1. Will this work for the scenario mentioned above?

    2. If it will, how will the performance be in multiplayer, for instance with 12 players each having 20 units per wave changing their pathfinding at the same time?

    I know tower defenses use the most basic kind of pathfinding, but I haven't seen you mention anything in that direction.

    Also, the reason I would like to apply your work to a TD is to have advanced movement that TDs never have, and at the same time use the best pathfinding I can get.

    Thanks for your time and looking forward to your answer.
     
  26. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @nicholasboshoff,

    Indeed, this will work as a so-called 2.5D setup. The path calculation is not done by our system, but we are compatible with other systems, e.g., Unity's pathfinding. The 'natural turn' is done by the controller. You can use our built-in example controllers or, if they don't fit your needs, code your own, since controllers are highly game-specific. Overtaking is possible if the maze is broad enough. You can test the basic features with our free version. I recommend building a simple corridor with a broader part where the units can overtake. You need to attach AIMFollow to the agents so that their main target is in front. Additionally, you need a second behaviour for the overtaking; you should start experimenting with AIMAvoid and AIMSeek, but other behaviours might be helpful, too.

    Happy exploring,
    Martin 'Zetti' Zettwitz from Polarith
     
  27. nicholasboshoff

    nicholasboshoff

    Joined:
    Oct 24, 2019
    Posts:
    3
    Thanks for the reply, Martin. I am testing with the free version and will try your tips.
     
  28. Eck0

    Eck0

    Joined:
    Jun 6, 2017
    Posts:
    48
    Hi, I still can't use your product since it doesn't meet my requirements. Are you going to fix the collisions with unit controllers? I need RTS-style movement, and this is not compatible with that modality ...
    And please put an RTS scene in the project.
     
  29. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hello @Eck0,

    Unfortunately, there was no response to my last post, which would have helped us make your project work. As I demonstrated, there is nothing to 'fix' since there is no bug, only non-optimal setups or parameters. Again, you need a setup consisting of a target behaviour (in your case pathfinding or AIMArrive), a slowing behaviour (like AIMReduction), and an avoidance behaviour (like AIMSeek, AIMFlee, or AIMAvoid). Additionally, your controller's speed needs to be scaled by Context.DecidedValues[ObjectiveID] (ObjectiveID = 0 in case interest is your first objective). Our demo controllers apply this via the property ObjectiveAsSpeed.
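
    To illustrate what that means in code, a stripped-down controller could look roughly like this (a minimal sketch, not one of our demo controllers; the way the backend context is accessed and the DecidedDirection property are assumptions, the DecidedValues scaling is the part that matters):

    using Polarith.AI.Move; // assumed namespace of AIMContext
    using UnityEngine;

    // Sketch: the agent's speed is scaled by the decided objective value, which is
    // the idea behind ObjectiveAsSpeed. A slowing behaviour like AIMReduction then
    // lowers this value near the target, so the agent slows down instead of overshooting.
    public class SpeedFromObjectiveController : MonoBehaviour
    {
        public AIMContext aimContext;
        public int objectiveId = 0;  // 0 if interest is your first objective
        public float maxSpeed = 5f;  // illustrative value

        private void Update()
        {
            var context = aimContext.Context;             // assumed accessor
            Vector3 direction = context.DecidedDirection; // assumed property
            float speed = maxSpeed * context.DecidedValues[objectiveId];
            transform.position += direction.normalized * speed * Time.deltaTime;
        }
    }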

    Best regards,
    Martin 'Zetti' Zettwitz from Polarith
     
  30. RyanJVR

    RyanJVR

    Joined:
    Jan 15, 2014
    Posts:
    9
    Hi,
    I'm about to purchase AI Pro for a project I want to start in the new year, but just wanted to check whether it will function with the new DOTS workflow Unity is introducing. I know your plugin says it's already multi-thread ready; I just wanted to check whether there would be any problems, or whether there will be support for it?

    Also, out of curiosity, roughly how many agents with 3D spherical sensors have you managed to run successfully in a single scene before the performance drop becomes noticeable or unacceptable?

    Thank you
    Ryan
     
  31. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @RyanJVR,

    We are not only multi-threading ready, we already support it via our AIMPerformance component. We haven't tested it with Unity's new DOTS system, though. The job system and burst compiler look like they would require a major rework of our system. Maybe you guys in our community can share your experiences with features like the ECS.

    The agent count strongly depends on the setup and the sensor resolution. Fortunately, I can share a table from my latest research project :) Note that the measurement was done using Unity's Profiler and is thus subject to its overhead, so the system is faster in a build. In most cases, a low-resolution sensor will be sufficient, and since you have control over the sensor resolution, you can adjust it to your needs.

    tabelle.png

    Have a great day,
    Martin 'Zetti' Zettwitz from Polarith.
     
    RyanJVR likes this.
  32. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    621
    As I already brought this up in the past, I thought I'd chime in: First of all, an unfortunate effect of mixing DOTS multithreading with standard .NET multithreading is the possibility of threads getting in each other's way and losing performance to CPU context switching. AFAIK, DOTS is avoiding those issues internally.
    On a positive side, I think Polarith is an excellent fit for DOTS: Data types aren't highly complex and interlinked, things are already nicely modularized/working with composition and many operations/methods are run numerous times per frame, especially when there's a lot of agents.
    Now, DOTS is still moving quickly, particularly the ECS part of it, and there are still a lot of questions that need to be answered. Not to mention quality-of-life and authoring issues - but they're tackling that now by starting to improve the GameObject/MonoBehaviour-to-entity conversion process. Either way, I also assume that switching to ECS would mean a major rewrite, which I totally understand is not feasible for a project that is more of a passion than a cash cow. However, the easiest way to get started with DOTS is probably jobs - particularly because they work very well in isolation. If there's a piece of code that is very performance intensive but can be nicely run in parallel, turning it into a job (ideally with burst compiling) means you'll only need to touch that specific portion of the code - and it can still result in mind-blowing performance benefits. I'm sure there's a couple of good candidates in Polarith, distance checks perhaps? Personally, I hope you'll give jobs a chance and just try them out once in a performance-critical part - then stress test and see how much of a hassle it was and whether it paid off. I'm quite convinced you'll like it ;)
    As for burst: You don't really have to do a lot to make burst work. Just use Unity's new data types (not as bad as it sounds, essential data types like Vector3/float3 are converted implicitly) and their new math library, slap the burst compile attribute onto a method/job and boom! This video is a good introduction with a practical example.
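
    To give an idea of how little ceremony is involved, a stripped-down distance check as a burst-compiled parallel job could look roughly like this (just a sketch, not Polarith code; the array layout is made up):

    using Unity.Burst;
    using Unity.Collections;
    using Unity.Jobs;
    using Unity.Mathematics;

    // Sketch: compute the squared distance from one agent to every target in parallel.
    [BurstCompile]
    public struct DistanceCheckJob : IJobParallelFor
    {
        [ReadOnly] public NativeArray<float3> targetPositions;
        public float3 agentPosition;
        public NativeArray<float> squaredDistances;

        public void Execute(int index)
        {
            squaredDistances[index] = math.distancesq(agentPosition, targetPositions[index]);
        }
    }

    // Usage, e.g. somewhere in an Update loop:
    //   var job = new DistanceCheckJob
    //   {
    //       targetPositions = positions,        // NativeArray<float3> you maintain
    //       agentPosition = transform.position, // implicit Vector3 -> float3 conversion
    //       squaredDistances = results          // NativeArray<float> of the same length
    //   };
    //   JobHandle handle = job.Schedule(positions.Length, 64);
    //   handle.Complete();                      // then read back 'results'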
     
  33. RyanJVR

    RyanJVR

    Joined:
    Jan 15, 2014
    Posts:
    9
    Thank you for the feedback. I look forward to working with the plugin.
     
  34. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @one_one,

    Thanks for your awesome and very detailed feedback. Especially the part about isolating code into jobs is quite interesting. We first saw the new concept at Unite 2018 in Berlin and thought that a complete rework would be necessary. Indeed, checking distances and angles is an essential part of our (radius) steering behaviours since it is performed for each receptor on each sensor for each target, so we will have a look at it. Additionally, using the burst compiler for these code sections will be quite interesting.

    Rock the weekend,
    Martin 'Zetti' Zettwitz from Polarith.
     
  35. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    621
    Yeah, that initial presentation was a lot to take in and left a lot of questions open for me. Many of them have been addressed in the meantime and things have progressed a lot, though. Unity is really pushing DOTS, both from a development side and from a strategy perspective. Looking forward to your thoughts on jobs + burst!
     
  36. christougher

    christougher

    Joined:
    Mar 6, 2015
    Posts:
    558
    Playing around with the 3d Sensors... *heart eyes/drooly face emojis*

    Sooo... local avoidance... Let's take the CopterHall scene and say I've got any number of agent copters all following the same path. I want them to give each other some space and avoid bumping into each other. I've added a third AIMEnvironment to include all objects on the Player layer (which, of course, the copters are all on). The settings are pretty much the same as the EnvironmentDanger for now. This seems to work other than the glaring fact that each copter will now seek to avoid itself. How can this local avoidance be factored into crowds, whether 2D or 3D, so AIs avoid each other but not themselves?

    Also... BURST/JOBS! *insert obnoxious numbers of heart eyes/drooly face emojis*
     
    Last edited: Dec 6, 2019
    Polarith likes this.
  37. Eck0

    Eck0

    Joined:
    Jun 6, 2017
    Posts:
    48

    I tried the previous example and it is not at all professional: the bodies make strange turns, their speeds decrease more and more, and they take a long time to position themselves ...
    This is not good RTS behavior.
    If it is true that this project can produce fluid RTS movement, how is it that there is no example scene?
    Watch the Dota 2 game, watch its movement, and tell me if this is professional in comparison ...
     
  38. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @christougher,

    We love to hear that you like our new features :)
    Usually, this is done via the SelfObject in AIMContext. Note that only the object that should be ignored must be inside the environment (Environment Copter); otherwise the child or parent objects are perceived, and thus we bite our own tail. Additionally, you can always set the InnerRadius of the radius steering behaviour (like AIMSeek) to a value that doesn't cover the copter; e.g., 0.16 worked for me in this scene.

    Have a great week with tons of emojis,
    Martin 'Zetti' Zettwitz from Polarith
     
    christougher likes this.
  39. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    Hi again guys,
    The copter is the people's choice for experimenting with! Speaking of which, I've been messing with Reduction to try and get group behaviors, and noticed the gizmo for Reduction is a circle on the ZY plane; I tried to change the projection to XZ. I do notice it affects the strength of the interest changes, but not the "speed" at which the copters travel. I must admit I haven't delved too deep into the controller settings for movement; it's quite the complex setup!
     
  40. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    PS: the desired effect, as per the documentation, is a slowdown when approaching a single "follow" target, or one of the members of an "adjust" environment entity.
     
  41. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @Eck0,
    Dota, like most MOBA games, uses flow fields for steering. There is mostly no active avoidance, but rather an altered flow field around obstacles. Thus, the path changes at runtime, but units are simply pushed apart by their colliders or, in more advanced cases, move according to the updated flow field.
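    For readers who haven't seen the technique: a flow field is essentially a grid of precomputed movement directions that units sample at their position. A toy sketch follows (not something from our package, and certainly not Dota's implementation; here the field naively points straight at the goal, whereas a real one is filled by a Dijkstra/BFS pass from the goal and updated around obstacles):

    using UnityEngine;

    // Toy flow field: each grid cell stores a direction; units sample the cell they
    // stand in and simply move along the returned direction.
    public class FlowField : MonoBehaviour
    {
        public Transform goal;
        public Vector3 origin;         // world position of cell (0, 0)
        public float cellSize = 1f;
        public int width = 64;
        public int height = 64;

        private Vector2[,] directions;

        private void Awake()
        {
            directions = new Vector2[width, height];
            for (int x = 0; x < width; x++)
                for (int z = 0; z < height; z++)
                {
                    Vector3 cellCenter = origin + new Vector3((x + 0.5f) * cellSize, 0f, (z + 0.5f) * cellSize);
                    Vector3 toGoal = goal.position - cellCenter;
                    directions[x, z] = new Vector2(toGoal.x, toGoal.z).normalized;
                }
        }

        // Units call this every frame and move along the returned direction.
        public Vector3 Sample(Vector3 worldPosition)
        {
            int x = Mathf.Clamp(Mathf.FloorToInt((worldPosition.x - origin.x) / cellSize), 0, width - 1);
            int z = Mathf.Clamp(Mathf.FloorToInt((worldPosition.z - origin.z) / cellSize), 0, height - 1);
            Vector2 d = directions[x, z];
            return new Vector3(d.x, 0f, d.y);
        }
    }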
    However, as I pointed out earlier, I used a rudimentary setup and controller just to make sure we are talking about the same thing. It should be clear that a demo scene created for you on the fly does not look as polished as a multi-million dollar game. Of course, fine-tuning of the behaviours and likely a custom controller are necessary for smooth movement. The decreased speed is optional; the agents can also stop abruptly at their destination if you want.
    So far there is no RTS or MOBA example scene because we cannot provide a separate example scene for every possible game genre; we have put this on our to-do list for a future update. Our AI movement is based on the combination of simple behaviours, and thus we ship scenes that help you understand how a single behaviour works. The game scenes we provide demonstrate basic combinations of scene setups and our example controllers. They are intended to give you hints about our system and its usage in your game, not to provide a copy-paste solution.

    I have created another quick example for you with the behaviours that I have mentioned before. You can find the settings in the graphic.

    group arrive.gif

    setting.png

    Happy coding,
    Martin 'Zetti' Zettwitz from Polarith.
     
    Last edited: Dec 12, 2019
    stevenwanhk likes this.
  42. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    And on further investigation, it turns out the copter controller does indeed react to Reduction; with the PID active, things are rather more complex than a simple "slow down on approach". I've actually got the copter to circle around a Follow target using Reduction. Getting something similar to Arrival is probably more complex. When combined with Adjust on a swarm it gets rather unpredictable, but saying it's fascinating is an understatement.

    Manipulating multipliers of a group at runtime is the closest to real steering AI I've ever had the pleasure to play with! Thanks guys :D
     
  43. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @BCFEGAmes,
    Indeed, the gizmo is buggy; I will implement a fix in the next update. If you, or anyone else, find a bug, you are welcome to open an issue on GitHub. But please play around a little and test to make sure it is a real bug before opening an issue :) Additionally, I will add an Objective As Speed option, as in the old controllers, to make the speed correlate with the decided magnitude.
    What exactly did you change to make the copter controller react to the magnitude? At first sight, I would scale the pitch and yaw, which is currently not implemented, so I am a little confused how you made it work with the current system :D
    And as always, thanks for this motivating feedback!

    Good luck to you, BCFEGAmes, King of the Swarm!
    Martin 'Zetti' Zettwitz from Polarith.
     
    BCFEGAmes likes this.
  44. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    Upon recreating the scene in 2D, I could get Reduction to work as described in the manual. After that, going back to the copter scene, I noticed that with PID turned off, the copter behaved more and more erratically as it approached the Follow target, with Reduction on interest set high and the same game object assigned to it as in Follow.
    I tried playing with the projection on XZ and a few more tweaks, but without much result there.
    Then I reactivated PID and reduced the values in Thrust, without really knowing what was going to happen. I've just tried again, and by halving the P, I, and D values, it definitely gets affected by the Magnitude Multiplier on Reduction, to the extent that the copter only reaches the target if the Reduction magnitude is close to 0?!? Magic!
     
    Last edited: Dec 12, 2019
  45. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    @BCFEGAmes,
    Thanks for sharing your experiments. I tried, but unfortunately I wasn't able to reproduce it. I used a single copter and a sphere, attached to the copter's front, as the reduction target. When changing AIMReduction.MagnitudeMultiplier, the movement speed of the copter didn't change, but of course the movement direction changed due to the influenced context values. I also changed CopterController.ThrustFactor and the Thrust PID values, but without any effect.
    Since interpolation of a multi-dimensional function on a sphere is highly non-trivial, we can currently only use an increased sensor resolution to obtain more accurate, and thus more stable, results. That's why we added the PID controller: it yields smooth movement in combination with a low-resolution sensor, even though the decision is not as accurate as with a high-resolution sensor.
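
    For those wondering what the PID part actually does: the thrust PID boils down to the standard textbook controller, a generic sketch of which looks like this (not the exact code in the copter controller):

    // Generic PID controller: the output is a weighted sum of the current error,
    // its accumulated history, and its rate of change. Halving P, I, and D (as in
    // the experiment above) halves the thrust response to the same error.
    public class Pid
    {
        public float P, I, D;   // proportional, integral, and derivative gains

        private float integral;
        private float previousError;

        public float Update(float error, float deltaTime)
        {
            integral += error * deltaTime;
            float derivative = (error - previousError) / deltaTime;
            previousError = error;
            return P * error + I * integral + D * derivative;
        }
    }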

    Have an outstanding weekend,
    Martin 'Zetti' Zettwitz from Polarith.
     
  46. ludolamezia

    ludolamezia

    Joined:
    Apr 30, 2019
    Posts:
    1
    Hi Martin
    When you say "attached" to the front of the copter, what do you mean?

    In my setup, the copter has Follow set to a target, and the same target is used as the game object in Reduction. I agree that the speed does not change but, as you say, the direction of Follow is "reduced" by the magnitude multiplier of the Reduction behaviour, and the copter ends up avoiding the Follow target when the multiplier is high enough. The default PID values smooth things out, and it's more difficult to see the Reduction effects.
     
  47. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    @ludolamezia,

    I simply placed a sphere in front of the copter and parented it to the copter's game object so that transformations are applied to it. This way, the values in the movement direction are always affected by the AIMReduction that has the sphere as its target. It's just simpler to play around with the magnitude multiplier and see the influence, since I was expecting a change of speed.
    Thanks for clarifying; I initially thought you were able to adjust the movement speed through some awkward hack :D I agree with you that the effect of the PID controller is not trivial to understand right away.

    Best greetings,
    Martin 'Zetti' Zettwitz from Polarith.
     
    BCFEGAmes likes this.
  48. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    Quick one: I'm aware one can visualize different objectives as separate colors on the sensor gizmo. Is there a possibility of showing more clearly which behavior is currently active, using the sensors? For example, with wander + avoid obstacles, draw the usual green bars for the wander behavior but a different color for avoid. I ask this fully aware that in context mode one would have danger and seek and could visualize the contrast that way, but what about differentiating between different behaviors writing to the same objective? I think I saw something to that effect in the advanced sensors...
     
  49. Polarith

    Polarith

    Joined:
    Nov 22, 2016
    Posts:
    315
    Hi @BCFEGAmes,
    This is a fascinating question. Honestly, we haven't thought of this use case in our workflow yet; of course, it would be very useful. At first glance, this is not possible out of the box, but I will think about it in more detail later. A quick workaround to achieve it: create an additional objective for each (combined) behaviour that you want to inspect and set its Constraint in AIMContext to 1 so that the objective is ignored in the decision. Additionally, create a behaviour parallel to the original one (which writes to interest or danger) and assign it to the new objective. Now you can use the AIMContextIndicator to visualize the newly created (parallel) objectives, and thus the behaviours, without losing the influence on the original objective.

    Hacky help,
    Martin 'Zetti' Zettwitz from Polarith.
     
  50. BCFEGAmes

    BCFEGAmes

    Joined:
    Oct 5, 2016
    Posts:
    36
    Hi there,
    Thanks as usual for the fast reply. I had been thinking in those terms; the one drawback I can see is that any changes to the behaviors would have to be copy-pasted to the "indicator" behavior for the visualization to stay correct with respect to the actual parallel behavior...
    Good time ahead!
    Sergio.