Which of these elements are good to create background elements for a (mobile) game when models would be too expensive on the drawcalls?
Best way to find out is to test it, but simple sprites are just quads. One potential benefit is that you could use multiple sprites to build a more interesting backdrop, in theory using less texture memory than one big quad. Also, a sprite's mesh can map more closely to the non-transparent area of the image, producing more triangles but less transparent fill, and alpha overdraw is expensive on mobile.
Yep, Unity's sprite system is basically quads + texture atlases. Pack your images into the same atlas and you save draw calls. It's probably an excellent solution for mobile.
Sprites are a bit faster than plain game objects: there is less overhead processing a 2D transform versus a 3D one. Fill rate is the same if you're using the same shader.
Not sure that is entirely accurate, as when you actually use a sprite in the scene it is a GameObject (with a SpriteRenderer). Also, the transform used is still a Vector3, since z is taken into account for ordering, so the GameObject and Transform overhead is still there. There may be performance optimizations in the sprite renderer over a standard renderer when used with an ortho camera/layers. Not sure on that, but I would imagine that is where any performance differences would lie (and in textures).
Sprites are game objects. Specifically, game objects with a SpriteRenderer component, and they use a 3D transform. They are standard objects that can be moved and rotated fully in 3D space like any other object. --Eric
Use sprites. Unity has a toolset built around them. Working with quads means writing your own toolset or buying one. - John A
If it's a repeating scrolling background, then use quads, since you can animate the UV coordinates. But hopefully UT will update sprites so you can access their UV coords.
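For reference, scrolling a quad's UVs only takes a couple of lines. A minimal sketch (the class and field names are my own, and it assumes the material's texture has its wrap mode set to Repeat):

```csharp
using UnityEngine;

// Scrolls a background texture on a quad by animating the material's UV offset.
// Attach to a quad whose texture wrap mode is set to Repeat.
public class ScrollingBackground : MonoBehaviour
{
    public float scrollSpeed = 0.1f; // UV units per second

    Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
    }

    void Update()
    {
        // Offsetting the UVs scrolls the texture without moving the quad itself.
        float offset = (Time.time * scrollSpeed) % 1f;
        rend.material.mainTextureOffset = new Vector2(offset, 0f);
    }
}
```

Note that accessing `rend.material` instantiates a per-object material copy, which breaks batching; for many scrolling quads you'd want to share one material and animate it once.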
If your concern is performance, I think quads are fine, at least for modern phones. I'm actually using polys in my game, with 10-20 triangles each, and I don't think it has that much effect on performance, none that I can notice at least. You should worry more about draw calls than tri count. I do it this way so I can cut out exactly what I need from the texture and fit more stuff in the same texture file. Just some WIP screenshots:
Thank you!! :3~~ And I'm pretty sure using polys is not that bad for performance... but I fear I have too many draw calls. It's advised not to go over 50, and right now I have 60 on average... and it'll still need background and foreground, enemies, etc. Luckily I haven't applied a draw-call minimizer or anything like that yet, so I can still reduce a lot. I'm not too worried yet; so far performance is good on a Galaxy S3 Mini, but I get this weird lag the first 3 or 4 seconds, then it runs smooth.
Wanted to add this comment because I repeatedly stumble across this thread in my searches: as far as I (a shader noob) have investigated, tight geometry will perform better than quads for the sole reason of fill rate. If you use quads, most of your sprite area is transparent. These transparent pixels cause alpha overdraw, essentially increasing the pixels rendered per frame, and this hits hard on mobile platforms. Source: http://forum.unity3d.com/threads/wh...ch-an-issue-on-mobile-dev.109463/#post-725378 I hope someone can enlighten me if this is no longer true; I see a severe performance impact on Android if I use the standard sprite shader to render tiles that include "empty" pixels: (http://forum.unity3d.com/threads/mobile-transparent-sprite-performance.334618/)
Good point. Ideally you would have the sprite's polygon shape map as closely as possible to its outline to minimise the transparent pixels.
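For what it's worth, the sprite texture importer's "Mesh Type: Tight" setting asks Unity to generate exactly that kind of outline-hugging mesh, and you can request the same for runtime-created sprites via Sprite.Create. A hedged sketch (the helper class name and the pivot/PPU values are my own choices):

```csharp
using UnityEngine;

// Creates a sprite whose generated mesh follows the opaque outline
// (SpriteMeshType.Tight) instead of a full quad, trading a few extra
// triangles for less transparent overdraw.
public static class TightSpriteFactory
{
    public static Sprite Create(Texture2D tex)
    {
        return Sprite.Create(
            tex,
            new Rect(0, 0, tex.width, tex.height),
            new Vector2(0.5f, 0.5f),   // pivot at the centre
            100f,                      // pixels per unit
            0,                         // extrude
            SpriteMeshType.Tight);     // hug the opaque area
    }
}
```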
Unity only seems to do this automatically for convex-ish shapes, I guess: if I look at the wireframe of my sprites, concave L-like structures get a plain quad, while clouds get a mesh that follows the outline more closely.
I haven't... how do I do that? :-0 I set objects inactive if they're far away from the player. This helped performance, but I'm crowding the levels too much. Now I'm going to try splitting the levels and loading them in chunks; it wasn't a problem with the 1st and 2nd levels, but the 3rd level is too big.
Eric proposed this: http://forum.unity3d.com/threads/how-to-preload-textures-in-the-scene.252554/#post-1668576
Sorry to resurrect such an old topic, but I'm curious about this in my scenario: I'm building a chunked, tile-based array that will fill the screen with tens of thousands of small black squares as a way of occluding the view of objects (like a fog of war). Should I fill my arrays with quads or sprites? Opinions seem to be mixed on sprites vs. quads, so I am still unsure.
Do you really need tens of thousands of black squares, or would a single plane or quad work with a texture map? It's just that with lots of meshes you will be hammering the GPU, and possibly the CPU to update them, when a single texture or a tiled set of textures could do the same job for less work.
What I'm planning is a high-resolution fog of war, and I've done some testing with a million quads: it is quite rough. I have experience with chunking from a study of marching squares, so that is definitely getting implemented. I am also theorizing an LOD-style approach on top of the chunking, where a chunk that has not yet been touched by the player's exploration just draws one single black quad instead of the 10,000 quads that would make it up otherwise. As always, I'm probably trying to reinvent the wheel, so if anyone has tips on what I'm attempting with this fog, I am all too happy to receive them.
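The bookkeeping for that promotion scheme could be as simple as a set of touched chunk coordinates. Everything below is a hypothetical sketch of the idea, not code from the thread (all names are mine):

```csharp
using System.Collections.Generic;

// LOD-style fog chunking: a chunk stays a single black quad until the player
// first touches it, at which point it is "promoted" to a per-pixel fog texture.
public class FogChunkMap
{
    readonly int chunkSize; // fog pixels per chunk side
    readonly HashSet<(int, int)> explored = new HashSet<(int, int)>();

    public FogChunkMap(int chunkSize)
    {
        this.chunkSize = chunkSize;
    }

    // Called when the player reveals a fog pixel. Returns true the first time
    // a chunk is touched, i.e. exactly when it needs promotion to a texture.
    public bool Reveal(int px, int py)
    {
        var key = (px / chunkSize, py / chunkSize);
        return explored.Add(key);
    }

    public bool IsExplored(int cx, int cy) => explored.Contains((cx, cy));
}
```

Untouched chunks then cost one draw of a shared black quad, and only explored chunks pay for a dynamic texture.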
Interesting idea. I assume you mean the texture cuts into itself with an alpha or something? I am unsure how I would accomplish that at run time.
Something along this line? https://answers.unity.com/questions/9919/how-do-i-create-a-texture-dynamically-in-unity.html I'd edit the pixels relative to the player's position?
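Roughly, yes: keep one Texture2D on a single fog quad, map the player's world position into texture space, clear the alpha around it, then Apply(). A rough sketch along those lines (all names and the world-to-texture mapping are assumptions about your setup):

```csharp
using UnityEngine;

// Reveals fog of war by clearing alpha in a square around the player
// on a single dynamic texture, instead of toggling thousands of quads.
public class FogReveal : MonoBehaviour
{
    public Transform player;
    public int resolution = 256;   // fog pixels across the level
    public float worldSize = 100f; // world units the fog texture covers
    public int revealRadius = 8;   // half-width of the reveal, in fog pixels

    Texture2D fog;

    void Start()
    {
        fog = new Texture2D(resolution, resolution);
        // start fully opaque black
        Color[] black = new Color[resolution * resolution];
        for (int i = 0; i < black.Length; i++) black[i] = Color.black;
        fog.SetPixels(black);
        fog.Apply();
        GetComponent<Renderer>().material.mainTexture = fog;
    }

    void Update()
    {
        // map the player's world position into texture space
        int cx = Mathf.RoundToInt(player.position.x / worldSize * resolution);
        int cy = Mathf.RoundToInt(player.position.y / worldSize * resolution);
        for (int x = cx - revealRadius; x <= cx + revealRadius; x++)
            for (int y = cy - revealRadius; y <= cy + revealRadius; y++)
                if (x >= 0 && x < resolution && y >= 0 && y < resolution)
                    fog.SetPixel(x, y, Color.clear);
        fog.Apply(); // the upload is the expensive part; batch it if you can
    }
}
```

Apply() re-uploads the whole texture to the GPU, so for large resolutions you'd only want to call it when something actually changed.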
I think I can see what I am doing with this now, I'll roll with it as it seems like the best option without a doubt. Cheers for the suggestion Arowx.
Sprites don't play nicely with Shader Graph (I rewrote the animation pipeline over the game jam weekend to deal with the issue). So if you are going to do anything fancy with shaders, start with quads and work your way up from there.
So I've been playing around and I'm nearly there with the concept. I've found the cutout setting on the Standard shader and it's actually quite amazing in that it casts shadows from the texture. I've run into a bit of a bug which I can't quite seem to figure out, on the boundaries of the quad. Check the image: on the right you can see small artifacts that show up sometimes, depending on how my array is configured. They seem like they are maybe wrapping or something. The array I've set up has every second column of pixels with its alpha set to zero. To keep everything normalized, the resolution on the object is 10 for a 100-pixel grid.
Code (CSharp):
public class FogTexture : MonoBehaviour
{
    public int a_Resolution;
    int width;
    int height;
    float rodent = 1;

    void Start()
    {
        width = height = a_Resolution;
        StartCoroutine("Otter");
    }

    IEnumerator Otter()
    {
        for (int i = 0; i < 100000; i++)
        {
            // Note: this allocates a brand-new Texture2D every iteration.
            Texture2D newTexture = new Texture2D(width, height);
            Color[] imageOneD = new Color[width * height]; // currently unused apart from the print below
            for (int x = 0; x < width; x++)
            {
                for (int y = 0; y < height; y++)
                {
                    // every second column is fully transparent
                    rodent = (x % 2 == 1) ? 0f : 1f;
                    newTexture.SetPixel(x, y, new Color(Random.Range(0f, 1f), Random.Range(0f, 1f), Random.Range(0f, 1f), rodent));
                }
            }
            print(imageOneD.Length);
            newTexture.Apply();
            GetComponent<Renderer>().material.mainTexture = newTexture;
            yield return new WaitForSeconds(0.2f);
        }
    }
}
Still exploring, but some ideas I've had are to exclude the boundary pixels like a margin, or dicking around with the UVs. I'm sure there's a fix though.
These are interesting: the first image has pixel 0 off; notice that each pixel draws from the center of its grid cell. The second image has pixels 0 and 9 off. The whole corner chunked out, which means they share some sort of wrapping relationship. There is also wrapping at the top and at the adjacent corners, even though those pixels have not been assigned any values.
newTexture.wrapMode = TextureWrapMode.Clamp; This seems to resolve the wrapping. Stage clear! (I think.) I am still oddly curious about this object I've created. It still seems to be creating some sort of mesh in order to display its shape (still need to test performance). That is the basis of 3D rendering, ain't it? An object that needs to be displayed has to have points of data defining its boundaries; those orange lines aren't just free.