I'm experimenting with giving evolving creatures AI using ML-Agents. I have a basic system that can generate modular creatures out of preset body parts in 2D, and it can even loosely simulate evolution with pseudo machine learning: creatures with poor body configurations simply live shorter lives and reproduce less. But I'm having trouble combining that with ML-Agents, and I'm unsure of the possible scope. To experiment with ML-Agents, I've pared it back to a single static creature that needs to collect randomly spawned food.

What information exactly do observations take from different types? I know base types like ints, vectors, transforms and the like are converted to floats. But what about a GameObject? Is it just grabbing the numbers from its transform, or is it taking labels and component data too?

Do you know of any resources on handling dynamic behaviours? For example, cases where the agent may have different actions available at different times (e.g. switching weapons, or going blind), or a variable number of allies/enemies/food objects. Or, in my case, instances where the creature may not have eyes, its mouth might be in a different spot on its body, it has extra fins, etc.

Is the grid sensor a good way to detect and differentiate hazards, buffs, and enemies, similar to the Food Collector example? I'm studying the GridSensorComponent script and don't really understand how it works. What information is it collecting? I just found https://github.com/mbaske/grid-sensor while writing this, so it might answer that question.

How does the agent contextualize observations? If it's just fed three out-of-context floats, how does it eventually comprehend that those are its coordinates? Further, if you feed it a list of transforms for food pellets to collect, does it just work out what each one is through iteration and ML, or do I need to assign context to these beyond rewards?
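For context, here's a toy sketch of the pseudo-evolution loop I described, where worse body configurations die younger and reproduce less. This is plain Python, not my actual Unity code; `fitness`, `body_score`, and the creature dicts are placeholders just to show the idea.

```python
# Toy model of my pseudo-evolution: lifespan and offspring count both
# scale with a fitness score derived from the body configuration.
# All names here are placeholders, not my real system.

def fitness(creature):
    # stand-in for "how good is this body configuration", in [0, 1]
    return creature["body_score"]

def step_generation(population):
    """Age everyone, cull creatures past their fitness-scaled lifespan,
    then let survivors produce offspring in proportion to fitness."""
    survivors = [c for c in population if c["age"] < fitness(c) * 10]
    children = []
    for c in survivors:
        c["age"] += 1
        # better bodies get more copies of themselves next generation
        children += [{"body_score": c["body_score"], "age": 0}] * int(fitness(c) * 3)
    return survivors + children
```

So selection pressure comes purely from lifespan and reproduction rate; no gradient-based learning is involved yet, which is the gap I'm trying to fill with ML-Agents.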
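To make the observation question concrete, here's my current mental model as a toy Python sketch (not actual ML-Agents code): everything I observe gets flattened into one vector of floats, and the network only ever sees the numbers and their order, never any labels. `creature_pos`/`food_pos` are placeholder names.

```python
# Toy sketch of how I picture observation collection: ints, vectors,
# distances etc. all end up as entries in a flat float vector.
# Plain Python with placeholder names, not the ML-Agents API.

def collect_observations(creature_pos, food_pos):
    """Flatten a 2D creature/food position pair into a flat float list."""
    obs = []
    obs.extend(creature_pos)   # 2 floats: my x, y
    obs.extend(food_pos)       # 2 floats: nearest food x, y
    dx = food_pos[0] - creature_pos[0]
    dy = food_pos[1] - creature_pos[1]
    obs.append((dx * dx + dy * dy) ** 0.5)  # 1 float: distance to food
    return obs

print(collect_observations((1.0, 2.0), (4.0, 6.0)))
# → [1.0, 2.0, 4.0, 6.0, 5.0]
```

If this mental model is right, then observing a whole GameObject directly doesn't really make sense, and I'd have to decide myself which floats to pull out of it. Is that correct?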
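For the dynamic-actions part, here's how I imagine masking would work for my creatures (going blind, missing a mouth, etc.), again as a language-agnostic toy sketch with placeholder names. I know ML-Agents has its own discrete-action masking API, which I haven't used yet; this is just the concept as I understand it.

```python
# Toy sketch of action masking: given the agent's current body
# configuration, filter the full action set down to what's available.
# Placeholder names; not the ML-Agents masking API.

def mask_actions(all_actions, creature):
    """Return only the actions this body configuration currently allows."""
    available = []
    for action in all_actions:
        if action == "bite" and not creature.get("has_mouth", False):
            continue  # no mouth, no biting
        if action == "look" and not creature.get("has_eyes", False):
            continue  # blind creatures can't look
        available.append(action)
    return available

actions = ["move", "bite", "look"]
print(mask_actions(actions, {"has_mouth": True, "has_eyes": False}))
# → ['move', 'bite']
```

My question is whether this kind of per-step masking is the intended way to handle abilities that come and go, or whether people restructure the action space entirely for cases like mine.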
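And for the variable number of food/allies/enemies: my guess is that variable-length entity lists get padded (or truncated) to a fixed size so the observation vector length never changes. I believe ML-Agents' BufferSensor does something along these lines, but I haven't verified that; here's the idea as a toy sketch, with `MAX_FOOD` and the tuple format as placeholder assumptions.

```python
# Toy sketch of handling a variable number of food pellets: flatten up
# to a fixed maximum, zero-pad the rest so the vector size is constant.
# Placeholder names/format; not the actual BufferSensor API.

MAX_FOOD = 3
FEATURES_PER_FOOD = 2  # x, y per pellet

def pad_food_observations(food_positions):
    """Flatten up to MAX_FOOD (x, y) pairs, zero-padding unused slots."""
    obs = []
    for pos in food_positions[:MAX_FOOD]:  # extra pellets are dropped
        obs.extend(pos)
    obs.extend([0.0] * (MAX_FOOD * FEATURES_PER_FOOD - len(obs)))
    return obs

print(pad_food_observations([(1.0, 1.0)]))
# → [1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
```

Is that roughly what happens under the hood, and if so, how does the agent learn to treat a zero-padded slot as "no food" rather than "food at the origin"?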