
Question How to handle a lot of data?

Discussion in 'Cloud Code' started by kitkat514, Mar 18, 2024.

  1. kitkat514

    kitkat514

    Joined:
    Mar 18, 2024
    Posts:
    2
    Will there be any problems processing a lot of data through Cloud Code?

    For example, in a city-building game, you want to store the user's building data.

    There is building data with various information.

    Code (CSharp):
    public class BuildingData
    {
        public int id;
        public int x, y;
        public int hp;
        public int level;
        public int upgrade;
        public DateTime rewardTime;
    }

    There is player city data

    Code (CSharp):
    public class PlayerCityData
    {
        public string cityName;
        public int cityLevel;
        public Dictionary<int, BuildingData> buildingDatas = new Dictionary<int, BuildingData>();
    }

    There are about 200 BuildingData entries in buildingDatas.

    PlayerCityData is stored in Cloud Save.

    When the client changes a building's state, it calls Cloud Code, which retrieves PlayerCityData from Cloud Save, changes the data, and saves it back to Cloud Save.

    The question I have is: will there be any problem with frequently loading and saving this much data in Cloud Code?
    Will there be any problems if requests come in from many players?
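    The round trip described above can be sketched as follows. This is a minimal in-memory stand-in: LoadCity/SaveCity are placeholder names for the Cloud Save reads/writes, not the real SDK calls.

```csharp
using System.Collections.Generic;

// Illustrative load-modify-save round trip for one building change.
// LoadCity/SaveCity stand in for Cloud Save reads/writes; they are
// not real Cloud Save SDK calls.
public class CityStore
{
    // playerId -> (buildingId -> level)
    private readonly Dictionary<string, Dictionary<int, int>> _save = new();

    public Dictionary<int, int> LoadCity(string playerId) =>
        _save.TryGetValue(playerId, out var city) ? city : new Dictionary<int, int>();

    public void SaveCity(string playerId, Dictionary<int, int> city) =>
        _save[playerId] = city;

    // What a Cloud Code endpoint would do per request: load, mutate, save.
    public void UpgradeBuilding(string playerId, int buildingId)
    {
        var city = LoadCity(playerId);
        city[buildingId] = city.TryGetValue(buildingId, out var level) ? level + 1 : 1;
        SaveCity(playerId, city);
    }
}
```

    Each client change triggers one such full round trip, which is the cost the question is really about.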
     
    Last edited: Mar 18, 2024
  2. samg-unity

    samg-unity

    Unity Technologies

    Joined:
    Mar 23, 2021
    Posts:
    47
    Hi @kitkat514

    Thanks for the questions.

    > will there be any problem with frequently loading and saving a lot of data in CloudCode?
    > Will there be any problems if requests come in from many players?

    Without seeing specifics this is a bit of a broad subject, but I would at least expect that you could encounter problems stemming from network-based requests (latency, timeouts, etc.).
    If you expect to write to the same records frequently, you may also have to look into conflict resolution; Cloud Save provides a write lock to ensure that you're only ever updating the expected state.
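    The write-lock pattern mentioned above is plain optimistic concurrency. Here is a minimal in-memory sketch of it; the class and method names are illustrative, not the Cloud Save SDK's API.

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of optimistic concurrency with write locks, the same
// pattern Cloud Save's write lock implements. Names are illustrative,
// not the real SDK surface.
public class WriteLockStore
{
    private readonly Dictionary<string, (string Value, string Lock)> _items = new();

    public (string Value, string Lock)? Load(string key) =>
        _items.TryGetValue(key, out var item) ? item : null;

    // Save succeeds only if the caller still holds the lock from its last load.
    public bool TrySave(string key, string value, string expectedLock)
    {
        if (_items.TryGetValue(key, out var current) && current.Lock != expectedLock)
            return false; // someone else wrote in between: reload, resolve, retry
        _items[key] = (value, Guid.NewGuid().ToString()); // fresh lock on every write
        return true;
    }

    public void SeedInitial(string key, string value) =>
        _items[key] = (value, Guid.NewGuid().ToString());
}
```

    A rejected save tells the caller its copy of the record is stale, which is exactly the conflict case that frequent writes to the same record will hit.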

    We've tested our systems to ensure that we can handle a high throughput; however, you should familiarise yourself with the current costs and limits when interacting with Cloud Code and other UGS services.
    To mitigate unexpected costs or limitations, I'd also suggest thinking about how to reduce requests through command batching or by taking advantage of some of the service batching methods.
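    Command batching, as suggested above, can be sketched like this: the client queues changes locally and sends them in a single request, so the server applies them all in one load-modify-save. The transport callback is a stand-in, not a real UGS call.

```csharp
using System;
using System.Collections.Generic;

// Illustrative command batching: instead of one Cloud Code call per building
// change, queue commands client-side and flush them in a single request.
// The send callback is a placeholder for the actual network call.
public record Command(string Type, int BuildingId, int Amount);

public class CommandBatcher
{
    private readonly List<Command> _pending = new();
    private readonly Action<IReadOnlyList<Command>> _send;

    public CommandBatcher(Action<IReadOnlyList<Command>> send) => _send = send;

    public void Enqueue(Command cmd) => _pending.Add(cmd);

    // One network round trip applies every queued change server-side.
    // Returns the number of commands that were flushed.
    public int Flush()
    {
        var count = _pending.Count;
        if (count > 0)
        {
            _send(_pending.ToArray()); // copy, so the batch survives the Clear
            _pending.Clear();
        }
        return count;
    }
}
```

    Flushing on a timer or at natural pause points (e.g. when the player closes a build menu) trades a little latency for far fewer requests.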
    I hope this helps!
     
  3. kitkat514

    kitkat514

    Joined:
    Mar 18, 2024
    Posts:
    2
    Thank you for your reply

    I was just curious whether processing large amounts of data in the way shown in my example is a typical approach.

    I will refer to the link. Thank you!
     
  4. samg-unity

    samg-unity

    Unity Technologies

    Joined:
    Mar 23, 2021
    Posts:
    47
    No problem @kitkat514

    If I understand correctly, then if you're planning on doing this in a single request/invocation, your main limitation will be the execution timeout of 15 seconds.

    Jobs/background processing of data is something I hope we will one day implement in Cloud Code to better support these types of use cases, so thanks for sharing yours!
     
  5. GabKBelmonte

    GabKBelmonte

    Unity Technologies

    Joined:
    Dec 14, 2021
    Posts:
    154
    Hey!

    Besides the great replies from Sam, as an aside from my own dev experience, I recommend you use "Structure of Arrays" instead of "Array of Structures".

    This will greatly reduce your payload size and will help overall with performance. You'll need to reconstruct the objects on the client side, though.

    Instead of
    Code (CSharp):
    public class BuildingData
    {
        public int id;
        public int x, y;
        public int hp;
        public int level;
        public int upgrade;
        public DateTime rewardTime;
    }

    public class PlayerCityData
    {
        public string cityName;
        public int cityLevel;
        public Dictionary<int, BuildingData> buildingDatas = new Dictionary<int, BuildingData>();
    }
    Do

    Code (CSharp):
    public class PlayerCityDataPayload
    {
        public string cityName;
        public int cityLevel;
        public int[] ids;
        public int[] xs;
        public int[] hps;
        // etc.
    }

    With this format, instead of repeating each property name once per object, the payload specifies it once for all objects.

    For instance, if you have 1000 BuildingData entries, the serialized payload would repeat the "hp" and "rewardTime" property names 1000 times each.
    With "Structure of Arrays", each name appears only once.

    On the client side, you can then rewind the data into structures that make sense for your game.

    More reading if you're interested: https://en.wikipedia.org/wiki/AoS_and_SoA
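    The conversion and the client-side "rewind" can be sketched like this. The field names are illustrative and only a few properties are shown.

```csharp
using System;
using System.Linq;

// Sketch of converting an array of structures (AoS) into a structure of
// arrays (SoA) for a smaller payload, then rebuilding per-building objects
// on the client. Only a few fields are shown; names are illustrative.
public record Building(int Id, int X, int Hp);

public class BuildingsPayload
{
    public int[] ids = Array.Empty<int>();
    public int[] xs = Array.Empty<int>();
    public int[] hps = Array.Empty<int>();

    // AoS -> SoA: one parallel array per property.
    public static BuildingsPayload FromBuildings(Building[] buildings) => new()
    {
        ids = buildings.Select(b => b.Id).ToArray(),
        xs = buildings.Select(b => b.X).ToArray(),
        hps = buildings.Select(b => b.Hp).ToArray(),
    };

    // SoA -> AoS: rewind the parallel arrays back into objects.
    public Building[] ToBuildings() =>
        ids.Select((id, i) => new Building(id, xs[i], hps[i])).ToArray();
}
```

    The serialized form carries each property name once per payload rather than once per building, which is where the size win comes from.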
     
  6. augustine_unity800

    augustine_unity800

    Joined:
    Apr 28, 2021
    Posts:
    5
    I have a similar issue with a "Tiered Leaderboard" structure I'm trying to implement. The idea is that there are 3 tiers, and each week X players are promoted or demoted. I also want players to compete within small buckets of around 10 players.

    The most straightforward approach would be to do a batch process of all the leaderboard buckets and save that data into the player's Cloud Save data.

    However, I am heavily restricted by the rate limits imposed on the Leaderboard Admin REST API and Cloud Save REST API. I am deploying this code in a C# Cloud Module.

    For example, with buckets of 10 players, I can at most support 1500 players, since reading the scores for each bucket in the leaderboards would entail 150 "Get Bucket Score" requests against the Leaderboard Admin API's rate limit.

    Similarly, the Cloud Save API also has a rate limitation.

    This means my function would definitely be killed after the 15 second timeout :(.
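    The capacity arithmetic above boils down to one request per bucket, so supported players = bucket size × request budget. The budget of 150 requests comes from the post's own example, not from the official rate-limit documentation.

```csharp
// Back-of-the-envelope capacity math from the post: one "Get Bucket Score"
// request per bucket means supported players = bucketSize * requestBudget.
// The 150-request budget is the example's figure, not an official limit.
public static class LeaderboardCapacity
{
    public static int MaxSupportedPlayers(int bucketSize, int requestBudget) =>
        bucketSize * requestBudget;
}
```

    With buckets of 10 and a 150-request budget, that gives the 1500-player ceiling described above; raising either factor raises the ceiling linearly.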

    Is it possible to have deployed Cloud Code or service accounts bypass these REST API limits? Or will new endpoints be available to handle these larger batch operations? Or is it the intended design of UGS to off-load most of this processing towards client-driven requests (e.g. in my use case, the player clients are individually responsible for determining their weekly tier results)?

    Thanks for taking the time, this would greatly help how I design my backend in the future.

     
  7. samg-unity

    samg-unity

    Unity Technologies

    Joined:
    Mar 23, 2021
    Posts:
    47
    Hi @augustine_unity800,

    > Is it possible to have deployed Cloud Code or service accounts bypass these REST API limits

    No, these rate limits are imposed by the Cloud Save and Leaderboards services. Service accounts have a higher rate limit than players, which should be sufficient for most use cases.

    For your use case do you require a batch process of all the leaderboard buckets at the same time?

    As I mentioned in the previous post, Cloud Code is not really suitable for large, long-running data processing at this time. However, if each leaderboard can be processed individually (e.g. by triggering Cloud Code on the leaderboard reset event), or if you can store the results of the processing in Cloud Save to be processed later by another script, that could help break down the problem size.
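    That decomposition can be sketched as incremental processing: each short invocation handles a single bucket and persists its result, so no single run approaches the 15-second timeout. The class below is an in-memory stand-in; the results dictionary plays the role of Cloud Save, and nothing here is a real UGS API.

```csharp
using System.Collections.Generic;

// Illustrative incremental processing: rather than one long-running job that
// would hit the 15-second Cloud Code timeout, each invocation processes a
// single bucket and records its result, spreading work across many short
// invocations (e.g. one per leaderboard-reset trigger). The Results
// dictionary stands in for Cloud Save; none of this is a real UGS API.
public class TierProcessor
{
    private readonly Queue<int> _bucketsToProcess;

    // Persisted intermediate results, so a later invocation can continue.
    public Dictionary<int, string> Results { get; } = new();

    public TierProcessor(IEnumerable<int> bucketIds) =>
        _bucketsToProcess = new Queue<int>(bucketIds);

    // One short invocation: handle a single bucket, persist the outcome.
    // Returns false once all buckets have been processed.
    public bool ProcessNextBucket()
    {
        if (_bucketsToProcess.Count == 0) return false;
        var bucketId = _bucketsToProcess.Dequeue();
        Results[bucketId] = $"tier-results-for-bucket-{bucketId}";
        return true;
    }
}
```

    Each invocation stays well under the timeout, and the persisted results let a final pass assemble the weekly promotions/demotions.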
     
  8. augustine_unity800

    augustine_unity800

    Joined:
    Apr 28, 2021
    Posts:
    5
    That makes sense. Thank you for the reply. I do believe there are other ways to achieve my desired feature. Since this Cloud Code seems quite similar to previous serverless solutions I've worked with (i.e. Firebase), I was mostly curious to see if the functionality was the same. Seems a little different, thanks.