
Discussion for loop - diminishing index

Discussion in 'Scripting' started by wideeyenow_unity, Aug 15, 2023.

  1. wideeyenow_unity

    wideeyenow_unity

    Joined:
    Oct 7, 2020
    Posts:
    728
I had a List that added everything available to it, without parameters, because of the way my map loads. So as a cheap fix I thought I'd just iterate through the list and remove any entries that didn't match the new parameters.

    I remember talking about this before, ages ago, and someone told me to just invert the "for loop" so that it counts down from list.Count and that would fix any issues with skipping over indexes.

    But before I wrote that out I decided to test this:
    Code (CSharp):
void FixAdjacent()
{
    for (int i = 0; i < adjacentHex.Count;)
    {
        if (adjacentHex[i].myType != TypeOfHex.Land)
        {
            adjacentHex.RemoveAt(i);
        }
        else { i++; }
    }
    fixedList = true;
}
    And through all my testing, this seems as though it works just fine. Which is weird, because I thought for loops could only be set up the standard way.

    So just to double check, is this method okay to use? Or is it possible it can glitch out? Or are there any opinions on a possible better way?
     
  2. spiney199

    spiney199

    Joined:
    Feb 11, 2021
    Posts:
    5,769
    The common method for removing elements from a list while looping over it is to loop over it backwards. Though you have found an interesting alternative way to do so.
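    For reference, a minimal sketch of that backwards loop, assuming the same adjacentHex list, TypeOfHex enum and fixedList flag from the original post:
    Code (CSharp):
    void FixAdjacentBackwards()
    {
        // Counting down means RemoveAt(i) only shifts elements we've already visited,
        // so nothing gets skipped.
        for (int i = adjacentHex.Count - 1; i >= 0; i--)
        {
            if (adjacentHex[i].myType != TypeOfHex.Land)
            {
                adjacentHex.RemoveAt(i);
            }
        }
        fixedList = true;
    }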
     
    wideeyenow_unity likes this.
  3. halley

    halley

    Joined:
    Aug 26, 2013
    Posts:
    1,834
    In professional coding, there's a thing called a "code smell." It may work, but something doesn't feel quite right. If I see a
    for (<init>; <condition>; <step>)
    block with an empty <step>, that triggers my code smell senses.

    The "work backwards" approach not only works reliably, but it's a well-known approach so people have immediate confidence in it. Your approach may work, but it will take anyone unfamiliar with the code to work out in their head whether it's good or not.

    And the right way to answer "is it possible it can glitch out" would be to have a set of test cases you can run and see if it breaks or not. Zero elements? One element that is good? One element that is bad? All elements good? All elements bad? Are all the bad elements removed in all cases? Are all the good elements kept in all cases?
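    For illustration, a minimal standalone sketch of those cases; RemoveLoopTests, RemoveBad and Check are hypothetical names, and plain bools stand in for hexes ("true" meaning an element to keep):
    Code (CSharp):
    using System.Collections.Generic;
    using System.Linq;

    static class RemoveLoopTests
    {
        // Same loop shape as in the original post, applied to bools.
        static void RemoveBad(List<bool> list)
        {
            for (int i = 0; i < list.Count;)
            {
                if (!list[i]) list.RemoveAt(i);
                else i++;
            }
        }

        static void Check(params bool[] input)
        {
            var list = new List<bool>(input);
            RemoveBad(list);
            // All bad elements removed and all good elements kept?
            System.Diagnostics.Debug.Assert(list.Count == input.Count(x => x) && list.All(x => x));
        }

        public static void RunAll()
        {
            Check();                                // zero elements
            Check(true);                            // one good element
            Check(false);                           // one bad element
            Check(true, true, true);                // all good
            Check(false, false, false);             // all bad
            Check(true, false, false, true, false); // mixed
        }
    }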
     
  4. wideeyenow_unity

    wideeyenow_unity

    Joined:
    Oct 7, 2020
    Posts:
    728
    That's what hangs over my head, but like most things in Unity they have overloads to handle less/more parameters in things.. huh, I don't know, it's bugging me.. I'll just look for source code on for loops tomorrow. :)
     
  5. CodeSmile

    CodeSmile

    Joined:
    Apr 10, 2014
    Posts:
    3,899
    The code smell for me is the if condition. Nobody expects an if condition inside the for loop to affect how the loop progresses. You may be tempted to add a more complex conditional or an else if clause, and eventually one of these conditions does not actually remove an item. That's when it'll fail. And because that condition is possibly a rare occurrence, you have a bug that rarely happens (possibly only in builds) and may be hard to explain.

    Reverse iteration prevents this altogether. Alternatively, to make this clearer, I simply wouldn't use a for loop but a while loop.
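    For what it's worth, a minimal sketch of that while-loop variant, assuming the same adjacentHex list, TypeOfHex enum and fixedList flag from the original post:
    Code (CSharp):
    void FixAdjacentWhile()
    {
        int i = 0;
        while (i < adjacentHex.Count)
        {
            if (adjacentHex[i].myType != TypeOfHex.Land)
            {
                // Stay at i: the next element has just slid into this slot.
                adjacentHex.RemoveAt(i);
            }
            else
            {
                i++;
            }
        }
        fixedList = true;
    }
    The behaviour is identical to the original loop, but the while makes it explicit that the index only advances when nothing was removed.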
     
  6. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,084
    After reading @CodeSmile's comment I asked GPT-4 if it would remove the if statement, and it simplified the code quite a bit more than I was expecting. I'm about to head to bed so this code hasn't been tested at all.
    Code (csharp):
void FixAdjacent()
{
    adjacentHex.RemoveAll(hex => hex.myType != TypeOfHex.Land);
    fixedList = true;
}
     
  7. Bunny83

    Bunny83

    Joined:
    Oct 18, 2010
    Posts:
    3,495
    Also, when you remove many elements, the absolute performance of going backwards would be better, since removing elements at or close to the end is faster. Iterating forwards would essentially maximize the overhead, since at every remove call you would move all remaining elements down one place. In the average case it probably doesn't make a huge difference.

    There are other methods to avoid unnecessary copying. If the order of the remaining elements should be preserved, you can use RemoveAll. If the order doesn't matter, you can replace the removed element with the last element of the list and then remove the last element, which is O(1). Of course this will kinda shuffle the elements around, so it only works when the order doesn't matter, but it lets you remove arbitrary elements in O(1). It again requires you to iterate backwards, or to repeat the current index when you go forward, so for such an algorithm a while loop is usually more appropriate.
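    For example, a sketch of that swap-with-last idea using reverse iteration (same adjacentHex / TypeOfHex / fixedList assumptions as the original post; note the remaining order is not preserved):
    Code (CSharp):
    void FixAdjacentSwapRemove()
    {
        for (int i = adjacentHex.Count - 1; i >= 0; i--)
        {
            if (adjacentHex[i].myType != TypeOfHex.Land)
            {
                int last = adjacentHex.Count - 1;
                adjacentHex[i] = adjacentHex[last]; // overwrite with the (already checked) last element
                adjacentHex.RemoveAt(last);         // removing the last element is O(1)
            }
        }
        fixedList = true;
    }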
     
  8. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,043
    I concur with both Halley (it's a code smell) and Bunny83 (it should be a while loop).
    Essentially you either slam a hefty comment next to that, or you stick to the tried patterns.

    Your litmus test is the following: when reading this in 3 years, will you go "huh?" Also, how easy is it to fix or check by sight alone? If you can tell there's trouble ahead (and you kind of can, hence this topic), just don't do it.
     
    wideeyenow_unity likes this.
  9. wideeyenow_unity

    wideeyenow_unity

    Joined:
    Oct 7, 2020
    Posts:
    728
    freakin' AI man... the threat is becoming even more real! lol, in most instances where I've seen AI come up with different or weird-looking methods, they wind up being more performant. So I'll have to benchmark all suggestions just to see, and I'll probably stick with whichever method takes the fewest ticks. Thanks for the tip!

    This is true, I just started the project the other day, so I'm still finalizing the structure overall. So when it comes to more hex.Types, it is possible I might add in a bunch. Good call!

    As Ryiah's post shows, that does intrigue me, as I never once thought to use RemoveAll(). But as CodeSmile brought up, there may be more within the parameters of filtering, so I don't think RemoveAll() would work (unless I made a separate bool function, which I do sometimes use). As far as shuffling the order, that's not too big an issue, as the adjacentList in this case is just for A* pathfinding, or adjacent "hexes" that offer functionality to the selected hex.

    LMAO.... Not even gonna comment on that.. but yes.. I completely understand what you mean, lol.

    I'm not too familiar with while loops, but I thought they continue iterating until the above check is met. So I'm a little confused on how to say if (allHex.conditions == conditions) = stop iterating, but I'm sure that's just because of my lack of using and understanding while.

    To be honest, I was planning on re-doing the way the map loads and creates, to have better control overall. But I was in a hurry because it was getting late, just happened upon List(filtering) again, and the subject intrigued me. I would just like to have "second-hand" knowledge for the next time this becomes an issue, and not think, just do.

    So throwing cleanliness or smell out of the equation, I'll probably decide to go with fewer ticks (even if by 1, lol).
     
    Bunny83 likes this.
  10. CodeRonnie

    CodeRonnie

    Joined:
    Oct 2, 2015
    Posts:
    280
    I was going to say the same thing as Bunny, that you can go backwards and swap the current element with the last one before removing the last element, then decrementing the index. That method avoids calling Array.Copy internally each time an element is removed, but doesn't preserve order.

    Also, one technique I've developed to preserve order and avoid Array.Copy calls from removing items is to maintain two lists. As you iterate forward through the list, instead of removing undesired items, add items you want to keep to the temporary back buffer list. Then, clear the old list and swap their references at the end. This was much faster for me in my testing than removing in place, and it preserves the order. But, as with anything, don't take my word for it. Compare the performance for yourself to see what works best.
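    For example, a sketch of that double-buffer technique; the Hex element type and the backBuffer field are assumptions for illustration, while adjacentHex, TypeOfHex and fixedList are from the original post:
    Code (CSharp):
    // Hypothetical persistent back buffer, reused so it only allocates once.
    List<Hex> backBuffer = new List<Hex>();

    void FixAdjacentDoubleBuffer()
    {
        backBuffer.Clear();
        for (int i = 0; i < adjacentHex.Count; i++)
        {
            if (adjacentHex[i].myType == TypeOfHex.Land)
                backBuffer.Add(adjacentHex[i]); // keep, preserving order
        }

        // Swap the references: the back buffer becomes the live list.
        var temp = adjacentHex;
        adjacentHex = backBuffer;
        backBuffer = temp;
        backBuffer.Clear();

        fixedList = true;
    }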
     
    wideeyenow_unity likes this.
  11. Bunny83

    Bunny83

    Joined:
    Oct 18, 2010
    Posts:
    3,495
    When you process the whole list and may potentially remove many elements, you can actually do that inplace. All you need is some bookkeeping. This is essentially what RemoveAll does as well. So you simply use two indices, one of the element you're processing and one of the "last free slot". So when there's a gap because an element should be removed, you simply remember the last slot that can be overwritten and when you hit the next element that should be preserved, you simply copy it to that slot. That way you only iterate through the whole list once and you only move each element once.

    For example imagine this array and we want to remove every even number from the array. The actual process would look like this:
    Code (CSharp):
// 1  2  3  4  5  6  7  8  9  // 1. element
// ab
//
// 1  2  3  4  5  6  7  8  9  // 2. element
//    ab
//
// 1  2  3  4  5  6  7  8  9  // 3. element
//    a  b
//
// 1  3  3  4  5  6  7  8  9  // close gap
//       ab
//
// 1  3  3  4  5  6  7  8  9  // 4. element
//       a  b
//
// 1  3  3  4  5  6  7  8  9  // 5. element
//       a     b
//
// 1  3  5  4  5  6  7  8  9  // close gap
//          a  b
//
// 1  3  5  4  5  6  7  8  9  // 6. element
//          a     b
//
// 1  3  5  4  5  6  7  8  9  // 7. element
//          a        b
//
// 1  3  5  7  5  6  7  8  9  // close gap
//             a     b
//
// 1  3  5  7  5  6  7  8  9  // 8. element
//             a        b
//
// 1  3  5  7  5  6  7  8  9  // 9. element
//             a           b
//
// 1  3  5  7  9  6  7  8  9  // close gap
//                a        b
//
// 1  3  5  7  9  -  -  -  -  // clear remainder (a towards the end / capacity)
//                a
//
// 1  3  5  7  9              // set count to a
    "a" and "b" are simply indices. When an element should be preserved, it would be copied from b to a once there was an element that should be removed. So "a" simply tracks the last "free slot" while "b" represents our iteration index. When we copy an element we of course increase a after the copying but if an element should be removed, we keep that index so the next element that should be preserved would get moved to the next free slot in the sequence. Once b reaches the end, all the remaining slots at and after "a" can be removed all at once. For a List that means filling that area with default values and setting the internal size to "a".


    ps: If you really have time critical loops over large lists, it could be more performant to implement the List logic yourself with an array. A List will always make sure that "empty" slots of the capacity are always cleared / set to default. That means reference types get set to null so the actual references are removed. For value types it's kinda unnecessary as keeping nonsense artifacts in the capacity slots does not hurt. Depending on the application even dangling references in the capacity region might not hurt. They get overwritten when those slots are used again anyways. The List just does this clearing to enable garbage collection of those referenced objects. Though for most applications just using a List is much simpler ^^.
     
    Kurt-Dekker and CodeRonnie like this.
  12. wideeyenow_unity

    wideeyenow_unity

    Joined:
    Oct 7, 2020
    Posts:
    728
    Just my first benchmark test of my "original post" method above gave results of 1-20 ticks, even with a max count of 6 list entries. With my current setup I'm not going to be able to test further than 6. Nor am I going to be able to beat 1 tick... lol...

    I guess my worry about how badly sifting through the list(s = 400) like that would perform really is nothing at all to worry about.

    I will eventually find a better way to test the difference between all the aforementioned methods, because not knowing is still haunting me.
     
  13. Owen-Reynolds

    Owen-Reynolds

    Joined:
    Feb 15, 2012
    Posts:
    1,913
    I feel as if I've seen it more often done using a "removed" count, which is also the slide amount (the other index is the normal one). Code is like:
    Code (CSharp):
    int removedAmt = 0;
    for (int i = 0; i < size; i++) {
      if (shouldRemove(A[i])) removedAmt++;
      else A[i - removedAmt] = A[i]; // slide item down (if it slides 0 spaces we get A[i]=A[i] which is fine)
    }
    Of course the result is exactly as Bunny83 describes. But the thought process is "I've removed X items, so everyone needs to slide down X spaces to fill that gap".
     
    Last edited: Aug 16, 2023
    CodeRonnie likes this.
  14. CodeRonnie

    CodeRonnie

    Joined:
    Oct 2, 2015
    Posts:
    280
    Good stuff.
     
  15. Owen-Reynolds

    Owen-Reynolds

    Joined:
    Feb 15, 2012
    Posts:
    1,913
    Not really. OP asked about using a different type of loop, so got these answers. If they Googled "C# remove many items from list" they'd get the same "RemoveAll" that ChatGPT found. In fact, that Google result has to be the reason ChaG wrote what it did. I'd also like to think someone here would have written "use RemoveAll". And, as noted elsewhere, ChaG has so many wrong answers that its RemoveAll solution would need to be looked up, taking you to the same pages you'd have found from Google, but 3 minutes slower than if you Googled in the first place.
     
    Bunny83 and mopthrow like this.