So I am checking out Unity's transform sync compression, and it pretty much just does a direct (short) cast on the value. But if the value is really small, the short cast returns 0. How is the value actually passed to the client correctly with this compression? What I'm doing right now is multiplying by a precision value (100000) before the short cast, and when the client receives the short value I divide by the precision value again (which gives me back the correct floating-point precision). But the sync is still not correct, maybe because I am using snapshot interpolation.
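To illustrate the scale-then-cast idea described above, here is a minimal sketch (in Python rather than Unity C#, so the names are illustrative, not Unity API). It also shows the likely pitfall: with a precision factor of 100000, anything outside roughly ±0.32767 no longer fits in a signed 16-bit short, which can silently corrupt the synced values.

```python
PRECISION = 100000  # same precision factor as in the question

def compress(value: float) -> int:
    """Scale to an integer so small values survive the short cast."""
    scaled = int(value * PRECISION)
    # A real C# (short) cast silently wraps on overflow; here we make
    # the 16-bit limit explicit instead.
    if not -32768 <= scaled <= 32767:
        raise OverflowError(f"{value} * {PRECISION} does not fit in a short")
    return scaled

def decompress(encoded: int) -> float:
    """Undo the scaling on the receiving side."""
    return encoded / PRECISION

# A bare short cast would turn 0.00123 into 0; the scaled version keeps it.
encoded = compress(0.00123)   # 123
decoded = decompress(encoded) # 0.00123
```

Note the trade-off: the larger the precision factor, the smaller the representable range, so a factor of 100000 only covers about ±0.327, which is far too small for most position values and could be one reason the sync looks wrong.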
I'm not sure I understand your question. But as I understand it, UNET compresses angles to a short (16 bits), but rounds them to whole degrees, i.e. 183.43deg becomes 183. With 16 bits you *could* get 360/(2^16) ≈ 0.0055deg of resolution, but instead UNET's compression gives you 1.0deg of resolution (about 180x worse than it should be for 16 bits!). What I'd suggest is to make a custom compressed float reader/writer. Something like this:

WriteFloatCompressed(float myValue, CompressionArgs args)
float = ReadFloatCompressed(CompressionArgs args)

class CompressionArgs
    float minValue
    float maxValue
    int bytes // i.e. a short is 2 bytes
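The pseudocode above can be fleshed out like this (a Python sketch, not Unity C#; the names just follow the pseudocode). The idea is to map the known [minValue, maxValue] range onto the full unsigned integer range of the chosen byte count, so all 16 bits are actually used:

```python
from dataclasses import dataclass

@dataclass
class CompressionArgs:
    min_value: float
    max_value: float
    num_bytes: int  # e.g. 2 bytes = 16 bits, like a short

    @property
    def max_int(self) -> int:
        # Largest unsigned value for num_bytes, e.g. 65535 for 2 bytes.
        return (1 << (8 * self.num_bytes)) - 1

def write_float_compressed(value: float, args: CompressionArgs) -> int:
    """Quantize value into an unsigned integer of args.num_bytes bytes."""
    clamped = min(max(value, args.min_value), args.max_value)
    normalized = (clamped - args.min_value) / (args.max_value - args.min_value)
    return round(normalized * args.max_int)

def read_float_compressed(encoded: int, args: CompressionArgs) -> float:
    """Invert the quantization; the result is within half a step of the original."""
    return args.min_value + (encoded / args.max_int) * (args.max_value - args.min_value)

# Angles in [0, 360] with 2 bytes: ~0.0055deg steps instead of whole degrees.
angle_args = CompressionArgs(0.0, 360.0, 2)
encoded = write_float_compressed(183.43, angle_args)
decoded = read_float_compressed(encoded, angle_args)
# decoded is within half a quantization step (~0.00275deg) of 183.43
```

The key design point is that the sender and receiver must agree on the same CompressionArgs out of band; only the quantized integer goes over the wire.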