simplified AI perception

Discussion in 'Game Design' started by BIGTIMEMASTER, Jul 6, 2022.

  1. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    4,837
    I'm roughing out how some systems might work in a survival game.

    Imagine you are hunting deer. The deer should be able to perceive the player and do something when it does. Let's consider sight perception only, because the other perception methods work essentially the same way.

    My initial thoughts, and the examples I could find, all share something in common - they are like a literal simulation: shooting out tons of linetraces/raycasts in a radius around the AI's head. That's a lot of instructions.

    I came up with something that works, but I can tell it's going to get ugly fast as I expand on it. It would be good to get simpler.

    Suppose a sphere collider around the AI agent. When the player overlaps it, send a message to the agent. Then the agent shoots a single raycast at the player. If it hits, the player is seen. If not, something is obstructing the view. This can repeat on a timer until the overlap stops.

    That is the basic idea. Obviously you can pile more gotchas on top, like doing a shotgun blast of rays and tallying up total hits to determine if a threshold has been breached, or also getting the player's velocity, or tying the timing of the raycast to an animation (i.e. if the animal has its head down feeding, don't shoot the raycast).

    But the idea is just check if player is in line of sight directly, rather than simulate entire cone of vision constantly.
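    In rough pseudologic (Python here just for illustration; in Unity this would be an OnTriggerEnter/OnTriggerExit pair plus Physics.Raycast), the event-driven idea above might look something like this - `raycast_clear` is a hypothetical stand-in for the actual line-of-sight query:

    ```python
    class DeerPerception:
        def __init__(self, raycast_clear):
            # raycast_clear(a, b) -> True when nothing blocks the segment a -> b.
            # In Unity this would wrap Physics.Raycast against an obstacle layer.
            self.raycast_clear = raycast_clear
            self.player_in_range = False
            self.player_seen = False

        def on_trigger_enter(self):
            # Player overlapped the sphere collider: start checking.
            self.player_in_range = True

        def on_trigger_exit(self):
            # Player left the sphere: stop checking and forget them.
            self.player_in_range = False
            self.player_seen = False

        def tick(self, deer_pos, player_pos):
            # Called on a timer, not every frame, while the player is in range.
            if self.player_in_range:
                self.player_seen = self.raycast_clear(deer_pos, player_pos)
    ```

    The cheap trigger overlap gates the comparatively expensive raycast, so nothing runs at all while the player is far away.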

    Anyway, this isn't a question exactly, just sharing some thoughts because this is new territory for me. I'd be interested in how other developers have solved similar problems, or any opinions/concerns about my idea.
     
    Last edited: Jul 6, 2022
    laurentlavigne likes this.
  2. TonyLi

    TonyLi

    Joined:
    Apr 10, 2012
    Posts:
    12,158
    That's a fine approach. You start with a relatively inexpensive sphere trigger overlap check, then progress to more expensive checks such as a single raycast to the center of the player's body. If the raycast hits, you can stop there. Otherwise you can do feet and head raycasts.

    You can start a coroutine that re-checks visibility every 1 second or so until the player leaves the sphere trigger or the player hasn't been visible for long enough that the deer has forgotten about the player. No need to re-check every single frame.
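    A sketch of that loop (in plain Python rather than a Unity coroutine; `is_visible` stands in for the raycast, and the interval/memory values are made-up placeholders):

    ```python
    CHECK_INTERVAL = 1.0   # seconds between raycasts, like WaitForSeconds(1f)
    MEMORY = 5.0           # seconds until the deer forgets an unseen player

    def visibility_loop(is_visible, max_time=60.0):
        """Yield (time, aware) samples until the deer forgets the player.
        is_visible(t) stands in for the raycast result at time t."""
        t, last_seen = 0.0, None
        while t <= max_time:
            if is_visible(t):
                last_seen = t
            aware = last_seen is not None and (t - last_seen) <= MEMORY
            yield t, aware
            if last_seen is not None and (t - last_seen) > MEMORY:
                return  # forgotten: exit, like stopping the coroutine
            t += CHECK_INTERVAL
    ```

    The point is simply that the re-check runs once per interval, not once per frame, and the loop terminates itself once the player has been out of sight long enough.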
     
  3. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    4,837
    Awesome, thanks @TonyLi. I am just pulling stuff out of thin air so good to get some verification from somebody with more experience.

    I took my test a couple steps further just because I was having fun with it:
    I also added a check for the players velocity, and also distance.

    Like you said, I am not shooting the raycast every frame, but just per second. Of course, that can easily be adjusted as needed.

    From velocity and distance checks, I build up "fear points". Basically, first the animal might see you and go into an alert state in which it keeps watching you, and then if you are moving or getting closer, it builds up fear points. When some threshold is breached, we might fire off reaction events. I won't do that stuff now but I think the idea is verified enough.

    Also, like you mentioned a "cooldown", I might have a shelf life on fear points, so that if you were just chilling the animal might seem to get used to you - provided no sudden movements.
     
    angrypenguin and TonyLi like this.
  4. kdgalla

    kdgalla

    Joined:
    Mar 15, 2013
    Posts:
    3,715
    My favorite way to implement perception is to first do the sphere collider test, then determine the direction of the player relative to the enemy and do a simple dot product between the enemy's forward and the player's direction. That implements a cone of vision, roughly. Adding a raycast, like you mention, would probably be a good idea, since my method does not account for obstructing walls.
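    The dot product test boils down to comparing the cosine of the angle to the target against the cosine of half the cone angle. A minimal 2D sketch (plain math rather than Unity's Vector3.Dot / Vector3.Angle, which do the same job):

    ```python
    import math

    def in_view_cone(enemy_pos, enemy_forward, player_pos, fov_degrees):
        # Direction from enemy to player.
        dx, dy = player_pos[0] - enemy_pos[0], player_pos[1] - enemy_pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0:
            return True  # standing on top of the enemy
        fx, fy = enemy_forward
        # Normalized dot product = cosine of the angle between the vectors.
        cos_angle = (dx * fx + dy * fy) / (dist * math.hypot(fx, fy))
        # Inside the cone when the angle is under half the field of view.
        return cos_angle >= math.cos(math.radians(fov_degrees / 2))
    ```

    Tweaking `fov_degrees` changes the cone width with no collider mesh involved, which is the advantage mentioned below.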
     
    BIGTIMEMASTER likes this.
  5. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    4,837
    you gave me the answer I was just looking for, lol.

    I was thinking about making a custom, hemisphere-shaped collider, but I knew some simple vector math existed that I could use instead to judge whether the player is too far outside the AI's field of view. It will be better to use the dot product, because then I can easily tweak the range of the field of view rather than having to make a mesh for it.

    Thanks!
     
  6. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,453
    If it were me I would cycle through AIs that do a raycast from the center of the deer head to several points on the player.

    Don’t do every single unit every frame, rotate through a list and do a limited number per turn.

    This saves massively and kind of makes sense: if a unit is surrounded by other units, it is going to be more distracted and less likely to see the player.

    Once the unit has a direct line of sight to the player, I would check it every frame. And I would give it a float that updates every frame based on things like how well the player blends with the terrain, how dark it is, how far away they are, whether or not they're moving, other objects that could potentially distract the unit, and so on.

    Once it hits a threshold the player is detected.
     
    Martin_H and BIGTIMEMASTER like this.
  7. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    4,837
    That feels kind of complicated to me. I want there to be a sense of realism, but if I can avoid coding a literal simulation with many different rules, that's what I try to do. I have the entire rest of the game to make too, so if any one area requires me to write a complicated wiki that I have to refer to any time I need to tweak the system, that's too much to manage.

    Currently I am sort of simulating some of those factors you've mentioned, but all I do is change the size of the perception trigger collider. Effectively the same thing is accomplished, and all I've done is change the scale of a capsule. This has the added benefit that it's super easy and visual to watch play out in real time.

    I also tried implementing the vector math to determine the total radius the animal can see - but there again, I found that just manipulating the scale and position of the capsule collider was easier.

    I child the capsule collider to the animal's skeleton head bone, so when the animation makes the head go down or look around, you get a predictable result. You can sneak towards an animal when it's feeding, but if it looks up and towards you, it will see you.

    In making the decision to flee, I watch the player's velocity, distance, and stance. Each contributes "fear points", and once fear points reach a threshold, I can tell the animal to run away. Of course, what that threshold is and how each watched value contributes to it can be changed per animal type. Same with its perception parameters, which are basically just the size of the capsule collider trigger.
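    A hypothetical sketch of that "fear points" accumulation - the weights and thresholds here are placeholder tuning values, not anything from the thread:

    ```python
    def fear_contribution(velocity, distance, crouched,
                          v_weight=2.0, d_near=10.0, d_weight=1.5):
        # Faster movement and closer range both add fear; sneaking reduces it.
        points = velocity * v_weight
        if distance < d_near:
            points += (d_near - distance) * d_weight
        if crouched:
            points *= 0.5   # a crouched stance halves the contribution
        return points

    def should_flee(samples, threshold=30.0):
        """samples: list of (velocity, distance, crouched) readings over time.
        Returns True once accumulated fear crosses the per-animal threshold."""
        total = 0.0
        for v, d, c in samples:
            total += fear_contribution(v, d, c)
            if total >= threshold:
                return True
        return False
    ```

    Per-animal tuning would just mean giving each species its own weights and threshold.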

    I found quite a few examples of different perception systems, but I just don't see the benefit of complex simulations where the end result is not something the player can predict. Basically, if the rules seem random in the player's mind, I don't think it's worthwhile to make complicated rules.

    My goal is that the player should be able to watch the animal's head to know when to sneak and when to wait. And when the player is spotted, the animal will react realistically - a curious animal might watch for a while and only flee if the player is acting a fool. A skittish animal might run right away.

    So far I've been able to keep it an entirely event-based system - not watching any values per frame or even on a timer. I don't think performance is a major concern, because typically there would only be one animal simulated at a time. In the future I may consider something like a herd of caribou, in which case it might matter if I was doing lots of raytraces and such for each animal.

    Then again, for a herd it may be possible to pretty much let a single leader make all the decisions, but I haven't thought too far into that yet.
     
    Last edited: Jul 10, 2022
  8. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,453
    I promise that isn’t as complicated as it sounds.

    I'm a terrible coder and managed to get this up and running in a couple hours.

    You are just using a handful of methods multiple times and cycling them through one enemy at a time.

    And colliders are SUPER expensive compared to raycasts.

    I should mention that you of course would need to check the angle so that it ignores the raycast method if the player is behind them.

    EDIT: also worth mentioning, if it's the raycasts that have got you sweating, you need to get over it and dive in. Raycasts are everything in modern games.
     
  9. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    4,837

    What is it about a collider that makes it expensive? It is possible to discriminate what it is looking for, right? So that it only queries for specific things, which should reduce its overhead?
     
  10. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,453
    There are lots of methods, and it’s hard to say for each instance.

    Some of the more basic things to know: sphere colliders, and box colliders aligned to 90° angles, are super cheap.

    So if you have an object that needs detailed colliders, but doesn't necessarily need them all the time, you can throw a sphere on it that acts as a trigger to enable the more detailed ones.

    And of course make sure that you are using lots of layers that only look for what they need to.

    Lastly, never do anything every frame or physics cycle unless you need to.

    If it were me I would only use colliders for sound and smell.

    And when you do that it's pretty simple: make the collider emitted by the player as big as the sound/smell they make, and the AI's sensing collider as big as their senses.

    As an example, a human might have a sensing collider for hearing of 10 units, while a dog has 50. Then, regardless of what size you make the sound, it will work out so that dogs can hear you from much further away.

    But again, if you really want to make it cheap, just measure distances, or use RaycastAll if you want obstacles to affect it. That does seem like nuking it to me, though.
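    The distance-only version of the sound idea is just a sphere-overlap test: the sound is "heard" when the noise sphere and the hearing sphere overlap, i.e. the distance between them is less than the sum of the two radii. A tiny sketch (the radii here are placeholder values):

    ```python
    import math

    def can_hear(listener_pos, listener_range, source_pos, noise_radius):
        # Two spheres overlap when the center-to-center distance is less
        # than the sum of their radii.
        d = math.dist(listener_pos, source_pos)
        return d < listener_range + noise_radius

    # A dog with a 50-unit hearing range picks up the same footstep much
    # further away than a human with a 10-unit range, without changing the
    # sound itself - which is the point made above.
    ```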
     
    Last edited: Jul 10, 2022
  11. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    4,837
    If you are measuring distance, you have to do that periodically.

    A collider is also checking periodically. Both a raycast and a collider can be filtered to check for specific things only. So what is the difference?
     
  12. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,453
    If you are just doing distances you can have it work as an event, or even take it a step further and put a large trigger around the player, so only enemies within the trigger check whether they're close enough to hear it.

    personally though, that does sound like way more effort than what is needed for a hunting game.

    so I would recommend having a sphere around the player that acts as their noise level which changes size according to how much noise they’re making.

    And of course decrease the size of the hearing collider if the deer is doing something like grazing.

    Using spheres on units and the player, you can easily do hundreds before worrying about CPU usage, so long as you keep them on a separate layer.

    Edit: you can also have the deer's listening collider shrink when it is in range of a third noise, such as a waterfall.
     
    BIGTIMEMASTER likes this.
  13. EternalAmbiguity

    EternalAmbiguity

    Joined:
    Dec 27, 2014
    Posts:
    3,134
    Heh. Something I've been working on because my main project crashes whenever I start it:
    [attached screenshot: npc_detection.png]

    The green wireframe is a ProBuilder cone with a trigger MeshCollider and OnTriggerEnter/OnTriggerExit events for objects with a given tag (could make it a component, name, whatever). And raycasts when specifically looking for an item, which only happens on demand or in OnDrawGizmosSelected (as you see here, where objects in view have a wire sphere and a line to them). Currently there's no combat context or predator/prey mechanics, so it's not called frequently, but I've got it working on 100 NPCs alongside other sim-like AI behaviors (all unoptimized) with a framerate around 150.

    For herd-type animals you might consider a system that groups animals that are close together, and makes sure to only attempt perceiving a given object one time (so loop through the animals, and once a raycast is successful, break and don't do it for the rest).
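    That shortcut is just an early exit from the loop - a sketch, with `sees` standing in for the per-animal raycast check:

    ```python
    def herd_detects(herd, target, sees):
        """Return (detected, raycasts_used). Stops at the first success,
        since one hit is enough to alert the whole herd."""
        checks = 0
        for animal in herd:
            checks += 1
            if sees(animal, target):
                return True, checks   # break: skip the rest of the herd
        return False, checks
    ```

    In the worst case (nobody sees the target) you still pay for every member, so a budget or round-robin on top of this, as suggested earlier in the thread, would cap that too.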
     
    BIGTIMEMASTER likes this.
  14. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    4,837
    that sounds reasonable to me
    adds to notes*

    yeah, that is a good idea. If one part of the herd finds you, presumably they are all gonna start running, so there's no need for everyone to keep firing.

    not something i would have thought of right away. thanks!
     
    EternalAmbiguity likes this.
  15. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    14,962
    You'll find loads of stuff like this where "inverting" the problem helps you to come up with a much more efficient solution. For instance, starting a visual search for a specific object from the animal's eyes is typically far more expensive than starting from the target object.

    For things which react in herds anyway, I'd consider running at least parts of the AI at a herd level rather than an individual level in general. This does add a layer of complexity, but can make things far more efficient. A couple of examples.

    A herd of deer decides to move from one grazing area to another. You could either communicate that decision to each individual animal and have them each individually pathfind to the new area, and then maintain / update those paths as they go. Or, you could have a single entity which represents the herd (which could be a 'leader' member within it) do the main pathfinding for the long distance, and each member of the herd does a much simpler "follow" behaviour. For certain types of animals / environments you could even re-use the same path with offsets, or look into a "flow field" (a grid which leads to a destination which can be re-used by every entity with a similar destination).

    The same herd of deer is watching out for predators. You could individually do the danger detection discussed above, but even the "efficient" version has a significant cost if it's done individually for each member of a large herd. So instead you can have a herd-level AI which picks some most-likely candidates and does the detection just based on them (as opposed to looping through all of them - though that could add a desirable 'randomness').

    With stuff like this you can scale up your herds 10x or even 100x with only relatively small increases in required AI effort.
     
  16. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    4,837
    awesome!

    I will probably try some of this out but it will be much later. I still have a couple months to finish my current project before I properly start on this idea. So far I think it should be feasible though - in just three days I've got the basics of the animal AI pretty well blocked out, and that was the only major system I didn't have any experience with.

    Anyway, herds would be an icing on the cake sort of thing. But i think it would be fun both in the game and to code so I'll probably give it a shot. If nothing else, just seeing a big caribou herd running across windswept tundra would be pretty dramatic scene for a trailer.
     
    angrypenguin likes this.
  17. GimmyDev

    GimmyDev

    Joined:
    Oct 9, 2021
    Posts:
    142
    I'm a bit late, but when doing the raycast it's a good idea to randomize the point on the target rather than always using the center of mass. Otherwise the single checked point can be occluded by spiky geometry while the full body is plainly visible, or the player can hold a crate in front of the checked point and use that to get close, which creates silly situations.

    (e.g. use the bone position plus a random offset smaller than the collider size when using compound collider shapes; with a single big collider, the jitter offset should be smaller still.)

    I also like to add "reaction time": not just a single-check alert, but a gauge that fills before detection is certified, and that decreases over time when the target isn't detected. Possibly with a check tied to animation (just a general look-at) while in the "reaction" phase.
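    A rough model of that gauge - visibility fills it, losing sight drains it, and detection is only certified at 1.0. The fill/drain rates are illustrative placeholders:

    ```python
    class DetectionGauge:
        def __init__(self, fill_rate=0.5, drain_rate=0.25):
            self.level = 0.0
            self.fill_rate = fill_rate     # gauge per second while visible
            self.drain_rate = drain_rate   # gauge per second while hidden

        def update(self, dt, visible):
            # Clamp the gauge to [0, 1] as it fills or drains.
            if visible:
                self.level = min(1.0, self.level + self.fill_rate * dt)
            else:
                self.level = max(0.0, self.level - self.drain_rate * dt)
            return self.level >= 1.0       # True once detection is certain
    ```

    With these rates, two seconds of continuous visibility certifies detection, and a brief break in line of sight only partially resets progress, which is the "reaction time" feel being described.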
     
    BIGTIMEMASTER likes this.
  18. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    4,837
    That's a good idea - I hadn't considered that. Or maybe you might shoot like 3-5 raycasts, sort of like a shotgun, and require that a certain percentage are positive hits?

    I imagine I'll end up doing something like that after some playtesting if it seems like the animals have unrealistic perception.
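    The "shotgun" variant might be sketched like this - `ray_hits` stands in for the actual raycast, and the jitter size and 60% threshold are made-up tuning values:

    ```python
    import random

    def shotgun_visible(origin, target, ray_hits, rays=5,
                        jitter=0.5, threshold=0.6, rng=random):
        """Fire several rays at jittered points on the target and count the
        target as seen only if enough of them connect."""
        hits = 0
        for _ in range(rays):
            # Aim at the target plus a small random offset per ray.
            point = tuple(c + rng.uniform(-jitter, jitter) for c in target)
            if ray_hits(origin, point):
                hits += 1
        return hits / rays >= threshold
    ```

    A partially obscured player then reads as "partially visible" instead of flipping between fully seen and fully hidden on a single ray.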
     
  19. GimmyDev

    GimmyDev

    Joined:
    Oct 9, 2021
    Posts:
    142
    Yeah people do that too!
    But since living beings have reaction times anyway, delayed additive detection isn't a problem. Human reaction times range from:

    - 112ms (limit of physiology)
    - to 2000ms (altered perception);
    - untrained people generally average around 1000ms,
    - while with practice people hover between 800ms and 340ms.

    All of these are longer than the 33ms of one frame at 30fps; that is, respectively:

    - 3 frames
    - 60 frames
    - 30 frames
    - 24 frames and 10 frames

    I think it's usual (as per AI post-mortems) to have AI tick at 5fps (every 200ms) in commercial games. So it makes sense to have a delay.

    In the end, whatever works best for your use case. Since you are using raycasts infrequently (firing only when there is an overlap with the sphere), the immediate 5 random shots is probably the best idea. As to which is fastest: there is a saying - if the first bottle isn't full, you don't need to look for a second one - and it depends on your physics system (I'm assuming Unity's PhysX), since it handles that. As usual, in case of problems, profile first.

    Technically humans can have zero or negative response times due to prediction of expected patterns, but that's just me geeking out and it's not useful here. As usual this answer was just an excuse to geek out, sorry for the massive unnecessary dump.
     
    angrypenguin and BIGTIMEMASTER like this.
  20. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    14,962
    Definitely agree with spreading stuff out over multiple frames where it's reasonable to do so.

    Reaction time is a thing. It also helps players to feel that stuff isn't "sudden", especially if you give them cues as to what's happening. E.g. the character starts looking towards you as soon as one hits, but doesn't mechanically react until the tolerance threshold is hit. Or loads of games show an icon / play a sound - silly, but popular.

    Highly unlikely to matter in the cases described, but another benefit is that if multiple things happen at once you don't get all of the computation hit in one frame. For heavy stuff I'd consider queuing it, so that whether there's 1 or 50 the performance impact is controllable and predictable. And back to game feel, the queue spreads the reactions out over time, which can feel a little more natural in some circumstances.
     
    BIGTIMEMASTER likes this.
  21. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    4,837
    A queue means that you encapsulate the actions into objects so you can put them into an array, then execute them in order one at a time?
     
  22. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    14,962
    That'd be one way to do it. I'd probably encapsulate the request rather than the whole action. For instance, for a visual detection queue I might have a Queue<KeyValuePair<Detector, Target>>.

    Edit: I mean a Queue, of course, hah.
     
    Last edited: Aug 10, 2022
    BIGTIMEMASTER likes this.
  23. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    4,837
    The reason for that is just efficiency? A request can be a tiny bit of code, whereas the action may have a lot more involved, including soft references to other classes (increasing footprint?)?

    So if the class that defines an action can stay in the same place, it's less work for the computer than if you were moving the "object" from one place to another?
     
  24. GimmyDev

    GimmyDev

    Joined:
    Oct 9, 2021
    Posts:
    142
    In more layman terms:
    - every detection is added to a list
    - you process x elements of that list per frame; once processed, you remove them from the list.

    Since elements are added to the bottom of the list, and you probably process them from the top, you get an easy way to manage the load - for example by monitoring the list length in case the load grows faster than you can process it (easy profiling), or by tying the number of elements processed to the clock and doing as many as possible.

    Now, that's effectively one way to implement a messaging system. A queue is just a list with extra characteristics: it enforces only pushing at the bottom and popping elements from the top, i.e. first in, first out.
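    The list-processing described above can be sketched concretely with a FIFO queue - `handle` stands in for whatever detection work each request triggers, and the per-frame budget of 2 is just an example value:

    ```python
    from collections import deque

    def process_queue(queue, handle, per_frame=2):
        """Pop up to per_frame requests off the front and handle them.
        Returns how many were processed this frame."""
        n = 0
        while queue and n < per_frame:
            handle(queue.popleft())   # first in, first out
            n += 1
        return n
    ```

    Because only a fixed number of requests run each frame, the per-frame cost stays bounded no matter how many pile up, and the queue length tells you at a glance whether the load is outpacing the budget.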

    What you put in the list, and where you put it, is up to you, as long as it allows you to query what you need. I find it easier for stuff to be explained as a concrete implementation using simple data structures, because I remember how hard it was for me when things were explained using high-level concepts like messaging - which, while I understood them, had so many nuances and possible implementations that I got lost in analysis paralysis. That's something smart, experienced people tend to forget when explaining. Don't overthink performance issues if you don't have them yet.

    Or maybe i read everything above wrong, lol, it's getting late. :p
     
  25. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    4,837
    Sounds to me like you are describing same thing as AngryPenguin.

    What you are saying about managing the list so that it's first-in-first-out reminds me of when I was in high school and I worked at a fast food restaurant. When you stock the freezer, you got to rotate the stock so that the oldest stuff gets used first. Same principle.
    I think almost any professional programmer would consider that an idiotic and probably problematic way to describe something like a command queue, but to me it immediately makes everything perfectly understandable, whereas if I just look at some text, it takes me 10 minutes to translate what I'm seeing into a real-world relatable example like the one I described.

    I agree it can be difficult sometimes to figure out what people are talking about when it's imprecise language and/or general principles instead of a technical implementation. Especially if there is a language or cultural difference.

    On the other hand, it seems to me that the lower the level of detail you discuss, the more people's idiosyncrasies come into play. But at the big-picture level - talking strategies and principles - it seems like there are a few well-known strategies that are proven to work, and many common mistakes to avoid.

    I think there are different ways people organize data in their brains. I'm probably not the ideal type for programming - I just do it because I can't afford to hire help.

    Luckily for me there are a few heroic helpers who can understand my non-technical babble and figure out what I'm trying to ask :)
     
    GimmyDev likes this.
  26. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    14,962
    Simplicity, more than anything. It's just a list of the callees / parameters for the exact same method call you'd be making otherwise.

    If a request needs more than that, then wrapping it in an object might make sense. But in this case it's literally just "call the method on this object when it gets to the front of the queue". No need to make it any more complicated than that, unless there's really a need.
     
    BIGTIMEMASTER likes this.
  27. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    4,837

    Ah, gotcha, I misunderstood. I don't have the vocabulary, so I thought "request" implied that it was another object.