I am trying to visualize real-world prediction data. I have ~160 objects, and for each of them I have a predicted position for every minute over the next 3 hours. I read these positions from a rather large text file via an HTTP request, and I currently re-fetch that data once an hour.

After parsing, the data goes into a dictionary in my movement system that maps each game object's ID to a Queue<PositionData> containing 180 fairly simple PositionData structs. I also have a second dictionary that maps the same game object ID to the current PositionData struct.

My movement system calls Entities.ForEach and loops through all of my objects, stepping each one closer to its next data point. Once the minute is up and the next data point is reached, the movement system dequeues the next PositionData item and puts it in the second dictionary holding the current data. Outside of interacting with those dictionaries and queues, all I'm doing is updating the transforms and rotations by adding pre-calculated step sizes.

My game runs SSLLOOWW. Much slower than other ECS demos and benchmarks I've used. Am I doing something horribly wrong by storing so much data in the movement system? Should I be storing all of those values in the PositionData component? Or in another component that represents the queue of future values? Or maybe in an external store like Redis? I really have no idea what to do with all this extra data for the future positions, if that is even the cause of my problem.
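For reference, here is a stripped-down sketch of the data layout I described, using plain .NET collections so it runs outside Unity. The names (MovementData, AdvanceMinute, the struct fields) are simplified placeholders, not my actual code:

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-in for one per-minute sample (field names are illustrative).
public struct PositionData
{
    public float X, Y, Z;             // target position for this minute
    public float StepX, StepY, StepZ; // pre-calculated per-frame step toward it
}

public class MovementData
{
    // object ID -> queued future samples (~180 per object, one per minute)
    public Dictionary<int, Queue<PositionData>> Future =
        new Dictionary<int, Queue<PositionData>>();

    // object ID -> the sample currently being stepped toward
    public Dictionary<int, PositionData> Current =
        new Dictionary<int, PositionData>();

    // Called for an object when its minute is up: promote the next
    // queued sample to be the current interpolation target.
    public void AdvanceMinute(int id)
    {
        var queue = Future[id];
        if (queue.Count > 0)
            Current[id] = queue.Dequeue();
    }
}
```

Every frame, the Entities.ForEach body reads Current[id] and adds the Step values to the entity's transform; the managed dictionary lookups inside that loop are the part I suspect is hurting me.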