
Questions regarding NativeArrays, their allocation in caches and random memory access

Discussion in 'Entity Component System' started by crysicle, May 20, 2020.

  1. crysicle

    Joined:
    Oct 24, 2018
    Posts:
    95
    Hello, I have a few questions regarding NativeArrays.

    1. In this video, at ~7:45, the commentator describes what the data layout looks like when using OOP design and then later DOD design. I was wondering, what does the cache look like when accessing a NativeArray? Suppose I have a NativeArray<int> of size 1000000. If I only access the first index, will the whole array be loaded into cache memory or just that single element? What if you take a NativeSlice of a NativeArray and access it instead?

    2. What are the costs associated with accessing array indices in a non-linear fashion?

    I made a test where I created 3 NativeArray<int>s of size 1000000. The 1st stored some data, the 2nd stored indices into the 1st array in a linear fashion ([0] -> 0, [1] -> 1, ...) and the 3rd stored indices into the 1st array in a randomized fashion ([0] -> 532100, [1] -> 1541, ...). I ran the test for each way of accessing the data for 10 mins each. The latency was higher by ~5% when using the randomized indices, which is close to just being noise, making me think there's not much performance cost to this, though I didn't test hundreds of scenarios of arranging the data differently. The setup looked roughly like the sketch below.
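
    A simplified sketch of what the setup looked like (not the exact code I ran; the job and variable names here are just for illustration):

    Code (CSharp):
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Jobs;
    using UnityEngine;

    // Sums the data array through an index array. Passing in a linear
    // index array gives sequential access; a shuffled one gives random access.
    [BurstCompile]
    struct IndirectSumJob : IJob
    {
        [ReadOnly] public NativeArray<int> data;
        [ReadOnly] public NativeArray<int> indices; // linear or shuffled
        public NativeArray<long> result;            // length 1, holds the sum

        public void Execute()
        {
            long sum = 0;
            for (int i = 0; i < indices.Length; i++)
                sum += data[indices[i]];
            result[0] = sum;
        }
    }

    public class IndirectionBenchmark : MonoBehaviour
    {
        const int Count = 1000000;

        void Start()
        {
            var data     = new NativeArray<int>(Count, Allocator.TempJob);
            var linear   = new NativeArray<int>(Count, Allocator.TempJob);
            var shuffled = new NativeArray<int>(Count, Allocator.TempJob);
            var result   = new NativeArray<long>(1, Allocator.TempJob);

            for (int i = 0; i < Count; i++)
            {
                data[i]     = i;
                linear[i]   = i;
                shuffled[i] = i;
            }

            // Fisher-Yates shuffle to randomize the visiting order.
            var rng = new Unity.Mathematics.Random(1234);
            for (int i = Count - 1; i > 0; i--)
            {
                int j = rng.NextInt(i + 1);
                int tmp = shuffled[i];
                shuffled[i] = shuffled[j];
                shuffled[j] = tmp;
            }

            // First run includes Burst compilation; warm up before timing for real.
            RunTimed(new IndirectSumJob { data = data, indices = linear,   result = result }, "linear");
            RunTimed(new IndirectSumJob { data = data, indices = shuffled, result = result }, "shuffled");

            data.Dispose(); linear.Dispose(); shuffled.Dispose(); result.Dispose();
        }

        static void RunTimed(IndirectSumJob job, string label)
        {
            var sw = System.Diagnostics.Stopwatch.StartNew();
            job.Run();
            sw.Stop();
            Debug.Log($"{label}: {sw.Elapsed.TotalMilliseconds:F2} ms, sum = {job.result[0]}");
        }
    }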
     
  2. DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    3,983
    1) Neither. The cache line (64 bytes on most platforms) containing that element will be loaded into cache.
    2) Expensive for each new cache line you load in, and also expensive if your old cache lines get evicted before you get to use them again.

    To elaborate more on (2): what a lot of people forget is that if you have a small array (say a couple thousand ints) and you random-access it a million times in a row, the whole array ends up sitting in cache and random access becomes just as fast as linear access. So maximizing cache usage is about exploiting both spatial and temporal locality.
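
    As a rough illustration of that point (a sketch only; the sizes and names here are my own, not measured numbers): drive the job below once with a ~8 KB array (2048 ints) and once with a ~40 MB array (10 million ints), issuing the same number of random reads each time. The small array stays resident in cache after the first few passes, so the random order barely costs anything; the large array misses cache on most reads.

    Code (CSharp):
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Jobs;

    // Performs 'reads' random reads into 'data' and accumulates them,
    // so the loop can't be optimized away.
    [BurstCompile]
    struct RandomReadJob : IJob
    {
        [ReadOnly] public NativeArray<int> data;
        public NativeArray<long> sum; // length 1
        public uint seed;             // must be non-zero for Unity.Mathematics.Random
        public int reads;

        public void Execute()
        {
            var rng = new Unity.Mathematics.Random(seed);
            long s = 0;
            for (int i = 0; i < reads; i++)
                s += data[rng.NextInt(data.Length)];
            sum[0] = s;
        }
    }

    Timing it works the same way as the sketch in the first post; the interesting comparison is how the per-read cost changes as the array size crosses the L1/L2/L3 capacities.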
     
  3. crysicle

    Joined:
    Oct 24, 2018
    Posts:
    95
    Found a really nice talk related to my questions, which explains how CPU caches are structured in layman's terms.