How does Unity handle texture memory?

Discussion in 'Editor & General Support' started by Zergling103, Apr 2, 2013.

  1. Zergling103

    Zergling103

    Joined:
    Aug 16, 2011
    Posts:
    392
    Hi there,

    I was wondering how Unity handles texture memory. When it loads a texture, what is the lifecycle of that texture between when it is loaded, when it is used for rendering, and when it is deallocated?

    After looking at our project through the new Memory Profiler in Unity 4, we realized that Texture2D is using 400-500 MB (the other culprit being the Asset Database, which uses another 400-500 MB). It -seems- as though Unity does not free texture memory at all, even if a texture is no longer being used by any renderer.
    Our project is a special case in that we do not use level loading - it is completely procedural, so there is nothing to load; objects are destroyed and recreated, but they may use different textures or models.
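
    Since we never load a level, whatever cleanup Unity normally does on level load never gets a chance to run for us. The closest workaround I've found so far is manually calling Resources.UnloadUnusedAssets() after destroying objects - a rough sketch of what I mean is below (the class and method names are just for illustration; this is not a real streaming system):

    Code (CSharp):
    using UnityEngine;

    // Illustrative only: manual cleanup after destroying procedurally generated objects.
    public class TextureCleanup : MonoBehaviour
    {
        public void ReleaseUnusedTextures()
        {
            // Frees any asset (textures included) that no loaded object references anymore.
            Resources.UnloadUnusedAssets();
            System.GC.Collect(); // reclaim the managed side as well
        }
    }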

    In Unreal 3+, if I'm not mistaken, textures that are not being rendered are released from memory almost immediately. When they are requested again, Unreal loads the lowest mip level first, then streams in the higher mip levels over many frames depending on texture size, so you don't get hiccups in frame rate.
    Does Unity do anything like this? Does it keep textures in memory indefinitely until it runs out, replacing the "oldest" textures with new ones? If that fails - say, because too many textures are being rendered at once - does Unity simply crash, or does it render the object with a blank white/grey/black/flat texture?
    (I hope Unity gets something like this soon if it doesn't, especially since Unreal 3 seems to have it already...)

    Thanks to anyone who may have some useful insight on this.

    -Steve
     
    Last edited: Apr 3, 2013
    Agent0023 likes this.
  2. Eric5h5

    Eric5h5

    Volunteer Moderator Moderator

    Joined:
    Jul 19, 2006
    Posts:
    32,401
    Fortunately, no, since that's one of the more obnoxious "features" of the Unreal 3 engine. IMO, seeing pixelated textures progressively loading in is a poor user experience, regardless of framerate hiccups. Texture memory is virtualized, so I don't think it can ever "fail" per se; you'd just have worse performance if it has to fetch textures from main memory.

    --Eric
     
  3. Zergling103

    Zergling103

    Joined:
    Aug 16, 2011
    Posts:
    392
    What do you mean by main memory? As opposed to what other memory?

    I presume by main memory you mean RAM, as opposed to texture memory on the graphics card.
    Does Unity deallocate textures from RAM, using the hard drive for virtualization?

    Currently it seems as though Unity is having problems with RAM, which is why I was wondering whether it could fail if the renderers demand more texture memory than is available. For example, say I have 1000 cubes, each with its own unique 2048x2048 texture - an obvious worst-case scenario. You probably wouldn't have enough memory to keep all of their textures resident at once, yet they are all requesting to be displayed. How would Unity handle this?
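
    (Rough math, assuming uncompressed RGBA32: 2048 x 2048 x 4 bytes is about 16 MB per texture, roughly 21 MB with a full mip chain, so 1000 of them is on the order of 16-21 GB - far more than any card's VRAM, which is why I'm asking what the failure mode is.)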

    P.S. Thanks for saying "IMO". I think their progressive mipmap feature is pretty cool, personally. ;)
     
    Last edited: Apr 3, 2013
    Agent0023 likes this.
  4. Eric5h5

    Eric5h5

    Volunteer Moderator Moderator

    Joined:
    Jul 19, 2006
    Posts:
    32,401
    As opposed to VRAM.

    By virtualization I mean that textures can be stored in main memory (at a performance cost) rather than VRAM. It has nothing to do with Unity, but rather with OpenGL (and I assume Direct3D).

    --Eric
     
    Agent0023 likes this.
  5. Zergling103

    Zergling103

    Joined:
    Aug 16, 2011
    Posts:
    392
    Right, but that doesn't entirely answer my question. I'm not concerned with the traffic between RAM and VRAM so much as with what Unity loads into RAM, when it unloads things from RAM, and when it loads things into it from the hard drive. Our game is running out of RAM - that's the killer - when it should be loading things from the hard drive and deallocating as needed.

    This is why I'd prefer progressive mip-map loading: having lower-priority textures (priority perhaps calculated from renderer distance and time since last used in rendering) dynamically cut down in resolution, or removed altogether, is a better alternative to the game simply crashing because it's trying to load too much at once.
     
    Last edited: Apr 3, 2013
  6. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Yeah, I'm pretty much the polar opposite of you, because I'd rather have that than low-res textures throughout the entire level. I'm talking ambitious here: think of a wide-scale 3D game on mobile. You have to budget really hard on mobile, and this would be a godsend.

    The visible mipmapping as textures appear is simply a result of poor hinting. You can hint to the engine what you expect to use next so it's paged in beforehand.
     
  7. Zergling103

    Zergling103

    Joined:
    Aug 16, 2011
    Posts:
    392
    I agree with hippo.

    The progressive loading isn't meant to be seen as a feature by the end user - in any case where it is visible to the player, I presume that's the fault of the developers using it.

    A system like this, if implemented in Unity, would be a great opportunity for developers to simply assign priority values / loading behaviours to textures, which Unity would then use to decide which mip-map levels should be loaded, kept or discarded.

    For example, for each texture you could calculate its on-screen visibility according to the renderers which reference it. Scale could be estimated from the distance between the camera and the bounding box - so that distant objects don't necessarily have their highest-resolution mip level loaded - though the algorithm could be a lot more involved.

    The point of this wouldn't be to introduce artifacts for the player, but rather to act as a middle ground between having a texture fully loaded and having it removed from memory outright when the game thinks it will no longer be used. It would reduce or eliminate frame-rate hiccups during texture loads, and it would allow textures to adaptively size themselves to the available resources, rather than the game simply crashing - as it does now.
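
    To make the idea concrete, here is a purely hypothetical sketch of the kind of priority score I mean - none of this exists in Unity today, and the weights are arbitrary:

    Code (CSharp):
    using UnityEngine;

    // Hypothetical priority heuristic: closer and recently visible = more important.
    public class TexturePriorityHint : MonoBehaviour
    {
        float lastVisibleTime;

        void OnBecameVisible() { lastVisibleTime = Time.time; }

        // Lower score = keep higher mip levels resident; higher score = safe to drop mips.
        public float Score(Camera cam)
        {
            float distance = Vector3.Distance(
                cam.transform.position, GetComponent<Renderer>().bounds.center);
            float idleTime = Time.time - lastVisibleTime;
            return distance + idleTime * 10f; // arbitrary weighting, just to illustrate
        }
    }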
     
    Last edited: Apr 3, 2013
  8. stevesan

    stevesan

    Joined:
    Aug 14, 2011
    Posts:
    65
    I'm also quite interested in this - I'm working on a procedural continuous-world game as well :)

    AFAIK, with VRAM virtualization it becomes pretty difficult to detect when you're "out of texture memory". You basically have to guess at how much the user has and try to stay within that budget on your own. At least, that's the conclusion I came to on previous games with different engines.

    My current plan is basically to do this on my own. I'll estimate how much VRAM I have free using this:
    http://docs.unity3d.com/Documentation/ScriptReference/SystemInfo-graphicsMemorySize.html

    ...and basically try to keep track of all the textures in my game, loading and unloading them at half and full res as necessary. I plan on using AssetBundle to load textures asynchronously with LoadAsync, and to free textures with Resources.UnloadAsset. However, I'm not 100% sure yet how I'll go about switching textures or figuring out which ones are "important". Worst case, I suppose I'd have to iterate through all my game objects, or put special scripts on them.
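
    Very rough sketch of what I have in mind - the class and method names are mine, the LoadAsync/UnloadAsset calls are the ones mentioned above, and I haven't verified any of this end to end yet:

    Code (CSharp):
    using UnityEngine;
    using System.Collections;
    using System.Collections.Generic;

    // Sketch of a manual texture budget tracker (illustrative, not tested).
    public class TextureBudget : MonoBehaviour
    {
        public AssetBundle bundle;           // bundle containing the textures
        int usedMB;                          // running estimate of what we've loaded
        readonly Dictionary<string, Texture2D> loaded = new Dictionary<string, Texture2D>();

        int BudgetMB()
        {
            // Leave headroom for meshes, render targets, etc.
            return SystemInfo.graphicsMemorySize - 128;
        }

        public IEnumerator Load(string name, int estimatedMB)
        {
            if (usedMB + estimatedMB > BudgetMB())
                yield break;                 // over budget: caller falls back to lower res

            AssetBundleRequest req = bundle.LoadAsync(name, typeof(Texture2D));
            yield return req;
            loaded[name] = (Texture2D)req.asset;
            usedMB += estimatedMB;
        }

        public void Unload(string name, int estimatedMB)
        {
            Texture2D tex;
            if (loaded.TryGetValue(name, out tex))
            {
                Resources.UnloadAsset(tex);
                loaded.Remove(name);
                usedMB -= estimatedMB;
            }
        }
    }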

    I'll hopefully let you guys know how this goes. You can follow me on @steverockan if you're interested.
     
  9. J_P_

    J_P_

    Joined:
    Jan 9, 2010
    Posts:
    1,027
    Curious about this as well. I'm working on a prototype, so I wanted to avoid optimizing until later, but I'm running into hiccups when moving the camera (after baking lightmaps). Once you stop the camera, the FPS shoots back up (until the camera is moved again). My gut says it's due to loading stuff into VRAM (looking at the profiler, the drops seem to correlate with texture memory going up). The profiler seems to be buggy in OS X Mavericks though, so I'm not sure everything is accurate. But once I look away from stuff, the texture memory drops; look back at stuff, and the texture memory goes back up and the framerate drops.

    Any way to tell it to just keep everything in the scene in VRAM at all times so it's not loading/unloading stuff constantly? That might not be ideal for huge scenes, but I shouldn't have a problem keeping this scene within my machine's limits.

    Attached is a screenshot from profiler (look at stuff, look away from stuff, 4 times).

    Yeah, draw calls rise as well - static batching doesn't seem to be working; I'm looking into that next. But if it were draw calls causing my FPS drops, I imagine the FPS would stay low while I stare at the scene. But again, after I stop moving the camera the FPS shoots back up (and it's not some weird bug with my mouselook code - you'd see that in the CPU profiler).

    Also, one last thing: why are the framerate drops not represented in the CPU graph? Is it because the dips are unrelated to the CPU? I'm mostly used to using the profiler with iOS, and the CPU ms/FPS measurement seemed accurate there. Is it different on OS X? The GPU profiler seems to be broken in Mavericks :(

    edit: See the second attachment - is this working as intended? The Meshes values match the memory graphs (what I'd expect), but the Textures values don't change despite a huge difference in the graph. That doesn't seem right to me.

    edit2: ah, static batching wasn't working as expected because of the lightmapping - I hadn't considered the effect that would have on draw calls.
     

    Attached Files:

    Last edited: Dec 31, 2013
  10. Gua

    Gua

    Joined:
    Oct 29, 2012
    Posts:
    455
    Last edited: Nov 28, 2018