
C# - Integers or Bytes?

Discussion in 'Scripting' started by Kowbell, May 22, 2014.

  1. Kowbell

    Joined:
    Apr 13, 2014
    Posts:
    3
    Hello, world!

    I have a question that Google searches have answered fairly unanimously, but I'd like to confirm it for Unity, since I don't know whether the answer might be different here. I'm pretty new to programming, having done some Python, and recently decided to take up C# and Unity to try my hand at game programming, which I aspired to do as a younger entity.
    Going through the C# documentation on integral types, I noticed that byte values are only 8 bits, whereas int values are 32 bits. I was curious whether it would therefore be more efficient - even if only on an extremely microscopic scale that wouldn't be noticeable in the long run - to use bytes instead of ints for any values between 0 and 255. After all, that is less memory...
    According to the glorious Google machine (and the several resulting StackOverflow tabs), it appears that, given the 32-bit design of processors (and I would suppose Unity would be similar, as it is only 32-bit...for now), integer values are in fact more efficient (again, only by extremely minuscule amounts). Would this hold true for Unity?
    And yes, as I mentioned before, I do understand that if one is more efficient, the difference would be negligible. This is more of a curiosity thing, and also because I've started using bytes more than ints since it makes me feel more efficient and legitimate - which brings me to my next question: is it bad form for me to use bytes instead of ints?

    EDIT: I guess I forgot to mention, but in the instances where I was using bytes instead of ints, the value represented would never exceed 255, nor would it be negative.
     
    Last edited: May 22, 2014
  2. Eric5h5

    Volunteer Moderator

    Joined:
    Jul 19, 2006
    Posts:
    32,398
    The best thing is to run a benchmark and see for yourself. It's going to vary somewhat from CPU to CPU anyway. Unity does publish 64-bit builds right now, by the way, so 64-bit is relevant. My recommendation is not to use byte unless it's for a good specific reason, and not "this one variable will take 1 byte instead of 4, whee!" But like I said, run a benchmark, or use the profiler if you have Pro.
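
    Along those lines, a minimal Stopwatch benchmark might look like the sketch below (the iteration count is arbitrary and results will vary by CPU and runtime; note that C# performs byte arithmetic as int and casts back, which is part of what gets measured):

    Code (CSharp):
    using System;
    using System.Diagnostics;

    public static class ByteVsIntBenchmark
    {
        const int Iterations = 100000000;

        public static void Run()
        {
            // Time a tight loop of byte arithmetic. C# promotes byte
            // operands to int, so the cast back to byte is required.
            var sw = Stopwatch.StartNew();
            byte b = 0;
            for (int i = 0; i < Iterations; i++)
                b = (byte)(b + 1);
            sw.Stop();
            Console.WriteLine("byte: " + sw.ElapsedMilliseconds + " ms, result " + b);

            // The same loop with int needs no conversion.
            sw = Stopwatch.StartNew();
            int n = 0;
            for (int i = 0; i < Iterations; i++)
                n = n + 1;
            sw.Stop();
            Console.WriteLine("int:  " + sw.ElapsedMilliseconds + " ms, result " + n);
        }
    }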

    --Eric
     
  3. npsf3000

    Joined:
    Sep 19, 2010
    Posts:
    3,830
    If you are going to use bytes instead of ints, I suggest you have a good reason. Three reasons I can think of off the top of my head:


    • You want your datatype to more closely model the expected behaviour - values below 0 or above 255 are incorrect, so why support them?
      • On the other hand, this greatly increases the risk of byte/int overflows impacting your work, and the range is fixed (e.g. you cannot set it to 3 to 152); perhaps a better solution is a property that ensures the range is respected (see the sketch after this list).
    • The code you are working with already uses bytes for some reason, and you wish to maintain compatibility. This will usually be due to the following...
    • You want a compact representation. I use bytes all the time when I'm writing networking code, working with very large arrays, working with images, etc., because they are space-efficient for the range they hold.
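
    For the property approach mentioned above, a sketch might look like this (the Health name and the 3 to 152 range are just illustrative):

    Code (CSharp):
    using System;

    public class Character
    {
        private int _health = 3;

        // A property can enforce an arbitrary valid range (3 to 152 here),
        // which a byte's fixed 0-255 range cannot express.
        public int Health
        {
            get { return _health; }
            set
            {
                if (value < 3 || value > 152)
                    throw new ArgumentOutOfRangeException("value");
                _health = value;
            }
        }
    }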
     
  4. GarthSmith

    Joined:
    Apr 26, 2012
    Posts:
    1,240
    Unless you're working with a lot of network code or writing hundreds of thousands of these things to disk, I feel that using byte instead of int is just asking for more bugs and a longer development time.

    I've used byte for things like encryption, where a key needs to be a specific number of bits or the encrypted data is really just a bunch of bits, not a numeric integer.

    Otherwise, you lose a lot of the benefits of int, which means longer development time - which means less time for optimizing in the places where it would actually pay off, and more time spent fixing bugs.
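
    For instance, .NET's crypto APIs traffic in byte arrays precisely because keys and ciphertext are bit patterns rather than numeric quantities. A rough sketch (the class and method names are illustrative):

    Code (CSharp):
    using System.Security.Cryptography;

    public static class KeyExample
    {
        public static byte[] MakeKey()
        {
            // A 128-bit AES key is exactly 16 bytes; byte[] is the natural
            // representation because the data is bits, not a number.
            using (var aes = Aes.Create())
            {
                aes.KeySize = 128;
                aes.GenerateKey();
                return aes.Key;
            }
        }
    }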
     
    Last edited: May 22, 2014
  5. lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,377
    They are technically more efficient on memory, just due to size.

    Processing them, on the other hand, gains little to no efficiency over int.

    And as you said, that memory efficiency... it's negligible. So why do it? Because you "feel more efficient and legitimate".

    More legitimate?

    Like a legitimate programmer?

    Yeah... no... that's not how programmers think at all. We think along the lines of "what best represents our data in a reasonably efficient way and with low risk of error".

    Byte has its use, and its use is where the data we want to represent is best represented by a byte - NOT because a byte takes up a slightly smaller amount of memory.

    Let's say, for instance, I wanted a struct that stored color values. Well... I might have 4 fields all typed byte (A, R, G, B), making my Color struct take up only 4 bytes (32 bits) total. Which is perfectly fine, because a 32-bit color stores each channel in precisely that same 0-255 range. It best represents our data with minimal risk of error, and yes, it's more compact in the end.
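
    Unity's own Color32 struct works exactly this way; a home-grown version might look like the following sketch:

    Code (CSharp):
    // Four byte channels pack the whole color into 32 bits,
    // matching each channel's 0-255 range exactly.
    public struct PackedColor
    {
        public byte a;
        public byte r;
        public byte g;
        public byte b;

        public PackedColor(byte a, byte r, byte g, byte b)
        {
            this.a = a;
            this.r = r;
            this.g = g;
            this.b = b;
        }
    }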
     
  6. Eric5h5

    Volunteer Moderator

    Joined:
    Jul 19, 2006
    Posts:
    32,398
    It's more efficient to process ints than bytes.

    --Eric
     
  7. lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,377
    Technically yes; it's a bit more complicated than I wanted to get into. But yes, if the byte needs to be fitted to the word size... definitely. Other times, maybe not.

    It's almost never more efficient in the CPU.

    It's often just as efficient.

    And sometimes less efficient.

    Of course Mono might also have other oddities I am not aware of (I'm more familiar with .Net).
     
  8. Eric5h5

    Volunteer Moderator

    Joined:
    Jul 19, 2006
    Posts:
    32,398
    It's always more efficient in the CPU to fetch an int instead of a byte (talking about any CPU which is even vaguely modern), since the CPU fetches words from RAM. So if you're just getting a byte it still has to retrieve the word and then separate out the single byte, which is an extra step. The same goes for writing, where for a byte it still stores 32 bits but has to mask out the unused parts. Performing actual calculations is usually the same for both as far as I know, but since you will typically have to fetch and store values in RAM as part of the process, that part can't be ignored.

    There is the possibility, however, that if you have a decent amount of data such as an array, using byte for all the data might fit in the CPU cache and int, being 4 times larger, might not, which would negatively affect speed.
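
    To put rough numbers on that: a byte[] packs four times as many elements per cache line as an int[], so for large arrays the footprint difference is substantial. A sketch, with the array length chosen arbitrarily:

    Code (CSharp):
    public static class CacheFootprint
    {
        public static void Compare()
        {
            // One million elements: ~1 MB as byte[] versus ~4 MB as int[].
            // The smaller array may fit in cache where the larger one does
            // not, which can make scanning the byte[] faster despite the
            // extra per-element widening work.
            byte[] bytes = new byte[1000000];
            int[] ints = new int[1000000];

            long byteSum = 0;
            for (int i = 0; i < bytes.Length; i++)
                byteSum += bytes[i];

            long intSum = 0;
            for (int i = 0; i < ints.Length; i++)
                intSum += ints[i];
        }
    }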

    --Eric
     
  9. lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,377
    Oh really, so you basically just said:

    sometimes it's faster, sometimes it's slower, depending on WHAT you're doing

    What I just said.
     
  10. makeshiftwings

    Joined:
    May 28, 2011
    Posts:
    3,350
    As Eric and Lord said, it depends on what you mean by "efficient". If you want to minimize memory footprint or size for network transfer, then packing bytes is better. If you want maximum speed for most operations, then ints are better.