Hi guys. I've got a curveball I want to throw out there and get an opinion on. If I grab my mesh and round its verts to X number of decimal places, and move the object to a rounded decimal position, will the mesh be more optimal? That is to say: will the shorter numbers consume less memory while the mesh exists? Thoughts?

Scalar variables like float / int / double take the same amount of memory regardless of what value they contain. This obviously extends to Vectors, which are simply arrays of scalars.

A single-precision floating point (float) is simply 4 contiguous bytes of memory. Same with a 32-bit integer (int)... just 4 bytes of memory. It's just that the bits "mean" different things depending on if it is an int or float. A 32-bit memory pointer is also ... wait for it... 4 contiguous bytes of memory! Generally an array of a primitive type will be tightly packed, one integer (or float) right after another.

So then, yeah, you should see a performance increase by not using values like 0.899999999 that consume more bytes? Even on verts! Is this the answer?

No, go read the above again.

> using values like 0.899999999 that consume more bytes

It does not matter what the DECIMAL representation of the number is. That's just for you and me to see. The computer operates on 4-byte floats, the BINARY representation.

Ah, so it always assumes the maximum. [The precision of four-byte numbers is processor dependent. float(18) defines a floating point type with at least 18 binary digits of precision in the mantissa. A 4-byte floating point field is allocated for it, which has 23 bits of precision.] So the float always has space in the value for it to be the maximum. Fascinating indeed.

It's not assuming anything. It's a specification. It is what it is. It's four bytes of memory. https://en.wikipedia.org/wiki/Single-precision_floating-point_format

The first computer was a bunch of relays. You can build one yourself. https://www.electronixandmore.com/projects/relaycomputer/index.html That's it. That's all computation is. Everything else is just more bits, faster bits, tinier bits. But it's all bits.

Say when a mesh moves, and each vert world position is updated, the length of those numbers on the world-position render update does not matter in regard to the allocation of space required to represent the potential maximum of those numbers? Or is it that a smaller value like the single integer 1 can update faster than a longer value like the float 0.99999? So although the size and speed gain is of course not huge, theoretically speaking, if "1" were 8 bits of one binary byte and 0.9 were 16 bits of two binary bytes, then representing 0.999999 would be a far greater chore than merely representing 1. But in terms of making the memory hold this value so that it knows this value, and renders the mesh vert at the correct micro-decimal spacing, shouldn't it theoretically take longer to do this? Unless the maximum is always used, so all values are attempted to be represented like 0.00000, and so all values are actually worth significantly more than one integer in bytes? Sorry, I am not trying to argue about it or anything.

I can add nothing more because it is clear you have understood essentially nothing I have written above.

OK, but if the logic is correct, then at some point the screen is told to display more characters than 4 bytes' worth.

@AnimalMan, try to understand binary in general, and then how it can be used to represent decimals (the explanation is here: https://en.wikipedia.org/wiki/Single-precision_floating-point_format#Converting_binary32_to_decimal ), and then maybe we can talk about halves and whether you are talking CPU or GPU, etc.

“In binary, each place value is 2 times bigger than the last (i.e. increased by the power of 2). Working out the value of 10101: that means that 10101 as a decimal number = 16 + 4 + 1 = 21.” https://www.bbc.co.uk/bitesize/guid...

Does it mean that it always considers the max-length number, even if that length is filled with zeros? Is that a no? Since the positions are floating point, they are always max length, and never as small as a single binary digit. Even if you told the transform "your position X is 0.9999", it would run through saying position X is 1.0000, so it would read every binary value of that number, even the zeroes. So even if you rounded to a lower accuracy, it would still be the full length you are computing. I got it.

So a double universe would compute twice as many binary values as a floating universe, which would translate 5 or 6 times more binary values than an int single-digit universe?

So if everything moved by the smallest possible unit in the floating universe, every value produced would be max floating length, which would be 5 times more than moving by the smallest possible value in a single-digit universe. This is to say: a single-digit universe's smallest possible unit is a single-digit number, while in a floating universe the smallest possible unit is a 6-digit number. For example, let's say we moved 0.00001 + 0.00001 in the floating world, or we moved 1 + 1 in the single world. Every value is considered for each number that is used. But since you need 1,000,000 in the single universe, then you need to use max length always; otherwise the single universe would wrap at digit 9.

Floating can wrap from approx 0.00000 to approx 9.99999 before accuracy loss, while an int would wrap from 0 to 9, as all lengths are 1.

That is true, but you failed to understand the concept of datatypes. The type float is defined as a 32-bit floating point number as specified in the IEEE 754 standard. The data type int is an integer number (whole number) that has exactly 32 bits, no matter what value you store in it. A short is also an integer type, with exactly 16 bits, while a byte is an integer type with exactly 8 bits. That's what datatypes are: they define an exact memory layout and how values are composed.

You can not store a variable amount of "bits", as computers / CPUs have a bus with a certain bit count. Nowadays we usually have 64-bit systems, so every register is at least 64 bits in size, or a multiple of that. An old 8-bit processor has an 8-bit data bus. You can not store 11 bits, as memory is organised in chunks of 8 bits. Also, as I explained a month ago on one of your other questions, if you could store individual bits, how would you tell where a number starts or ends? That would require some kind of separator, which is not possible in a binary system; there are only two possible values, 1 or 0. That's why datatypes have an exactly specified length. When it comes to storing data in files, there are tricks to utilize individual bits in order to save space. However, such systems still produce overhead in order to be able to tell individual values apart.

Every beginner has beginner questions, and that's totally fine. However, you have an exceptional talent to ask questions, get the concept explained 5 times from different angles, and still somehow get it wrong or not at all ^^. This is kinda frustrating. Though at the same time I'm really impressed by your endurance and determination, so I hope at some point you can actually understand those concepts. At the moment it seems you have several wrong pictures in your mind of how computers work in general. That was kinda obvious from your thread about compression. Please don't take that as a personal attack.

We try our best to explain most concepts as accurately as possible. However, you have to pay more attention to what we actually say / write. More often than not you straight out ignore what we've said, and then you switch to your own interpretation / imagination of how you think it would work. I know that I probably sound a bit condescending now, but to be honest, I can't find better words to describe the situation. English is not my first language and it's not that easy to find the right words. It's not my intention to make you feel bad. We all just try our best to make you understand what we love, which is computer science. We want you to be part of it. However, we can't do the learning for you.

A decimal isn't actually using less space. A decimal isn't actually a fraction of 0010 0000; it is 0010 0000 . (in binary) 0010 0000 × 5. But, as I said, not but an utterance of a word more shall be said on the subject.

On top of what was already said, maybe this helps: https://www.h-schmidt.net/FloatConverter/IEEE754.html Type in some numbers at the top. It will show you the binary representation, which, for a float, is always 32 bits long, no matter the decimal value stored. It also helps you grasp, or rather visualize, the concept of floating point precision issues, as the number you want to store in a float eventually can't be represented anymore, and thus the actual value stored will be different (at least in decimal; to the computer there is no in-between for that data type).

More of the abstract values are used to represent the binary digits. So if I make a save file and I save 8.04474099145, it saves lots of 0010 xxxx's, and therefore the memory of the number is longer. But as I had already said that I had understood that the maximum-length value is assumed to always be used, the language barrier arises in Kurt's reaction to the term "assume". From therein whence all comments had become tunnel visions and then wormholed. So not but more of a gasp of a whisper will be uttered upon the matters of the subject.

Post 2: I await an apology and the opportunity to grant forgiveness. I had just been suggesting that when this computer had fallen to earth, it was like a big block of finely made art; this was the block of the computer chip. A chip off the old block. And here the computer chip contained 16 abstract values for representing a language. And these abstract values were part of a product design. And that product design, when it fell into the hands of the humans, was replicated. But the original product was provided to multiple species. It was the one species who sent the cubes out into space and threw them at the planets in the hope of inducing a level of development in the species, to coerce them into performing a primitive trade.

But that question is slightly off topic, which is why none of us wanted to reply to it clearly with a yes / no, and told you to read about the binary representation of floats. Because, for example, 0.00000011920928955078125 will be accurately represented as:

00110100000000000000000000000000

while 0.1 is:

00111101110011001100110011001101

(which, by the way, converts back to decimal as 0.100000001490116119384765625)

Are you the same person who was asking about animation packaging and compression using text-based formats a few weeks ago?

It's not an assumption about the length of the contained number. It's about the fact that within the program all floats must be the same, predetermined length, because of multiple fundamental aspects of how a computer works.

For one, when numbers are operated on (added, subtracted, multiplied, etc.) the CPU has sets of registers that are the correct size to fit different data types in. When a CPU is asked to operate on two values of different sizes, it essentially converts the shorter value to the same type as the longer value by padding it out with 0's, because otherwise the operation will not work.

For another, computer memory is just one single, long line of billions upon billions of bits. When reading a variable, a computer needs to know both the position to start reading at and the number of bytes to read. This would be prohibitively complicated if fundamental data types could be individually sized. (How do we know how long a variable is? Oh, we can put that in a variable... see the issue? Or, what happens if we have a variable with a small number and then need to put a big value in it?)

So, no, floats aren't 32 bits long because they assume the maximum value. They are 32 bits long because someone had to pick a standardised length, and having standardised lengths for basic data types is a fundamental aspect of how computers work.

I guess I didn't realize it was kind of a boomerang question, one that I sort of knew the answer to before I asked. So, as the memory chip contains lots of storage modules, how do we know when a number ends? How do we know we have stopped allocating a number? Yeah, that's why there is a band: this entire allocation is the total length of the number, and whether you wanted it as an int or a float, the length is the same. And it's so that we don't have to use more storage modules to flag that a number has ended, if all modules are grouped into lengths that incorporate any such value from the smallest unit to the largest unit capable of being displayed in those bands. We know simply by the number of the module we are checking whether or not it is the 4th element of the memory chip, or the 5th element, for example. Due to Kurt's number length in post 1.

Regarding the alien rant: I was just trying to lighten up the aggressive judgement bunny had bestowed upon me and the questions I ask. As a beginner, I understand your frustration, but the engineering qualities of the machine are indeed relevant to understanding the logic of its assembly.