I have very little programming/scripting knowledge, but this has always confused me.
If an int value stores whole numbers and a float stores numbers including any decimal places, why does the int type exist? Doesn’t having the float type make it obsolete, or am I missing something? Maybe only using the int type where you can helps optimize the script… maybe?
Floating point numbers aren’t always guaranteed to land exactly on whole numbers, and they aren’t always necessary anyway. If you’re storing a number to keep track of how many lives you have, or ammo, you don’t need a variable that can store decimal places, because you’re only ever going to be working with whole numbers. Using floats in these situations can cause problems: slight inaccuracies creep in and you can end up with numbers like 5.0000001, which will trip up your code if you’re trying to check whether a value is equal to exactly 5.
I’m sure there are various other technical reasons, but if you are only going to use a variable for whole numbers, don’t use a float.
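Here’s a minimal C sketch of that exact failure mode (the ammo/shots counters are made-up names, just for illustration): adding 0.1 ten times should give exactly 1, but on a typical IEEE-754 machine the float total drifts slightly, while an int counter stays exact.

    #include <stdio.h>

    int main(void)
    {
        /* Ten "reloads" of 0.1 each should total exactly 1.0,
           but 0.1 has no exact binary representation. */
        float ammo = 0.0f;
        for (int i = 0; i < 10; i++)
            ammo += 0.1f;

        printf("float total: %.9f\n", ammo);    /* typically prints 1.000000119 */
        printf("exactly 1?   %s\n", ammo == 1.0f ? "yes" : "no");

        /* An int counter has no such drift: whole numbers are stored exactly. */
        int shots = 0;
        for (int i = 0; i < 10; i++)
            shots += 1;
        printf("int total: %d, exactly 10? %s\n", shots, shots == 10 ? "yes" : "no");

        return 0;
    }

The usual workaround when you genuinely need floats is to compare against a small tolerance instead of using ==, but for lives/ammo-style counters an int sidesteps the whole issue.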
Stupid how I wouldn’t even think of that; I just assumed basic computer logic would know what specific number I’m after. Makes a lot more sense now, and I instantly feel at ease about which type to use in which situation.
Also, in olden times when RAM was scarce, it was important to keep every little bit of it available because you’d need it later. Floating point values can take up more bytes than ints.
I thought there might be some optimization reasoning behind having the two types available. I didn’t think it would be that much of a difference, but I guess back then it was more of an issue.
To my knowledge, in C-based languages both floats and ints are 4 bytes. Floats do require much more processing, though, hence the need for CPUs to have floating point units. http://en.wikipedia.org/wiki/Floating-point_unit
Floats require extra processing and are inherently slower because they’re stored in a specially encoded format, whereas ints are straight binary and can be manipulated right away.
Note: that doesn’t mean you should go converting all your floats to ints and build a fully fixed-point system. Both are still extremely fast, and you should use the format that fits your current needs.
Doing basic math operations with int can be around 30% faster than with float. If you need to save RAM and your integer numbers are small enough, you can use short (System.Int16) or even byte instead of int; however, Int32 is a little faster than both. On a desktop CPU anyway; not sure about ARM. Oh yeah, and certain operations, like the various bitwise operators, can only be done on integers.
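To illustrate that last point, here’s a small C sketch (the flag names are hypothetical): the usual bit-flag tricks for packing state into a single integer rely on operators that simply aren’t defined for floating-point operands.

    #include <stdio.h>

    /* Hypothetical flags packed into one integer, one bit each. */
    #define FLAG_ALIVE    (1u << 0)
    #define FLAG_ARMED    (1u << 1)
    #define FLAG_SHIELDED (1u << 2)

    int main(void)
    {
        unsigned int state = 0;

        state |= FLAG_ALIVE | FLAG_ARMED;   /* set bits    */
        state &= ~FLAG_ARMED;               /* clear a bit */

        if (state & FLAG_ALIVE)             /* test a bit  */
            printf("alive, state = 0x%X\n", state);

        /* None of |, &, ~, <<, >> compile for float operands;
           e.g. `1.5f << 1` is rejected by the compiler. */
        return 0;
    }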
Nowadays, both floats and ints are typically 32 bits, though you can still declare 16-bit ints if you need to. But yeah, given the current state of memory, generally speaking (there are exceptions) you should use the data type that matches the intent of your logic. Working with whole numbers? int! Working with real numbers? float!
I think you’re coming at this from the way number systems encompass each other in the real world. Integers are a subset of the real numbers (which is what floats approximate), and the larger sets (the rationals and irrationals together) contain all the simpler ones (integers, whole numbers, naturals). In computing terms, it’s almost the opposite: the simplest type, a bit (1 or 0), is the all-encompassing building block, and everything else is described/approximated as a collection of bits.
Describing the integer 100 would be: 00000000 00000000 00000000 01100100
Describing the real number 0.1 (as a float) can only be done approximately: 00111101 11001100 11001100 11001101
(which, if you convert back to a real number, is actually 0.10000000149011612)
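If you want to see those bit patterns for yourself, here’s a small C sketch (assuming 32-bit int and IEEE-754 float, which is what practically every current platform uses) that prints the raw bits of 100 and of 0.1f, plus the value 0.1f actually holds.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Print the 32 raw bits of a value, most significant bit first. */
    static void print_bits(uint32_t bits)
    {
        for (int i = 31; i >= 0; i--)
            putchar((bits >> i) & 1 ? '1' : '0');
        putchar('\n');
    }

    int main(void)
    {
        int32_t  i = 100;
        float    f = 0.1f;
        uint32_t bits;

        memcpy(&bits, &i, sizeof bits);
        print_bits(bits);          /* 00000000000000000000000001100100 */

        memcpy(&bits, &f, sizeof bits);
        print_bits(bits);          /* 00111101110011001100110011001101 */

        printf("%.17f\n", f);      /* 0.10000000149011612 */
        return 0;
    }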
To my knowledge, in C-based languages both floats and ints are 4 bytes
Actually, in C/C++, type short is 2 bytes and type long is 4 bytes, but type int’s size varies by platform and compiler. It’s sloppy programming in C/C++ to assume int = 4 bytes, even though that’s true in most cases.
It’s not just about memory, but also about the speed at which a CPU can manipulate a value. On a 16-bit Windows system (yeah, it’s been a while) sizeof(int) would typically be 2 bytes.
Correction: the C/C++ standard doesn’t actually define the sizes of float, int, etc. They’re considered an implementation detail and will vary from system to system; check with your hardware manufacturer for details (not such a silly idea for many of the things you’d use C for). If you want size guarantees, see C++11’s or C99’s fixed-width type headers.
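A quick C sketch of the difference, for anyone curious: the built-in types only have guaranteed minimum sizes and can differ between platforms, while the C99 <stdint.h> (and C++11 <cstdint>) types have exact widths.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Built-in types: the standard only guarantees minimum sizes,
           so these numbers can differ from one platform/compiler to another. */
        printf("short: %zu  int: %zu  long: %zu  float: %zu\n",
               sizeof(short), sizeof(int), sizeof(long), sizeof(float));

        /* Fixed-width types: exactly the advertised number of bits everywhere. */
        printf("int16_t: %zu  int32_t: %zu  int64_t: %zu\n",
               sizeof(int16_t), sizeof(int32_t), sizeof(int64_t));

        return 0;
    }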
Good C/C++ code doesn’t use “int” or “short int”; it uses int64_t, int32_t or int16_t for that reason. Also, it’s kind of silly to worry about integer sizes to save bytes of RAM when you have billions of them. If you’re working on something like a 64k system, or doing 32-bit RGBA image encoding, or otherwise encoding large amounts of data, then sure: it documents your code, saves significant memory and makes complete sense. But using a short just because you don’t plan for the value to ever need more than 16 bits, to save a byte or two here and there (lol?), mostly risks causing a strange bug when a user/client doesn’t expect you to do that (after all, the language will happily convert larger values down and, beyond perhaps a compile-time warning, silently discard the extra data).
The main use of different sizes is in things like encoding large amounts of data, highly optimized algorithms, or signal processing, where the sheer volume of “objects” makes a difference. Or a PCI card might expect 8 bits of data, not 32; or maybe you want to tweak an algorithm for some piece of hardware (FPGA? ASIC?) for max performance because it can chew through vectors of, say, 16- or 128-bit buffers like crazy, etc.