
Why would I use byte, double, long, etc. when I could just use int?

I'm a pretty new programmer and I'm using Java. My teacher said that there were many types of integers. I don't know when to use them. I know they have different sizes, but why not use the biggest size all the time? Any reply would be awesome!!!

Sometimes, when you're building massive applications that could take up 2+ GB of memory, you really want to be restrictive about what primitive type you want to use. Remember:

  • int takes up 32 bits of memory
  • short takes up 16 bits of memory, 1/2 that of int
  • byte is even smaller, 8 bits.

See this java tutorial about primitive types: http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
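You can verify these sizes from Java itself: each primitive's wrapper class exposes a `SIZE` constant giving its width in bits (and, since Java 8, a `BYTES` constant as well):

```java
public class PrimitiveSizes {
    public static void main(String[] args) {
        // Each wrapper class reports its primitive's fixed width in bits
        System.out.println("int:   " + Integer.SIZE + " bits"); // 32
        System.out.println("short: " + Short.SIZE + " bits");   // 16
        System.out.println("byte:  " + Byte.SIZE + " bits");    // 8
    }
}
```

Unlike C, Java fixes these widths in the language specification, so they are the same on every platform.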

The space taken up by each type really matters if you're handling large data sets. For example, if your program has an array of 1 million ints, then it takes up 3.81 MB of RAM. Now let's say you know for certain that those 1,000,000 numbers are only going to be in the range of 1-10. Why not, then, use a byte array? 1 million bytes only take up 976 kilobytes, less than 1 MB.
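The arithmetic behind those figures can be sketched as a quick calculation (element storage only; the JVM adds a small per-array header on top of this):

```java
public class ArrayMemory {
    public static void main(String[] args) {
        int count = 1_000_000;

        // int is 4 bytes per element, byte is 1 byte per element
        long intArrayBytes  = (long) count * Integer.BYTES; // 4,000,000 bytes
        long byteArrayBytes = (long) count * Byte.BYTES;    // 1,000,000 bytes

        System.out.printf("int[1M]  ~ %.2f MB%n", intArrayBytes / 1024.0 / 1024.0); // ~3.81 MB
        System.out.printf("byte[1M] ~ %.2f KB%n", byteArrayBytes / 1024.0);         // ~976.56 KB
    }
}
```

Note this payoff only appears for arrays: a single `byte` local variable or field usually occupies a full slot anyway, so the savings come from large collections of values, not scalars.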

You always want to use the number type that is just "large" enough to fit, just as you wouldn't put an extra-large T-shirt on a newborn baby.

So you could. Memory is cheap these days and assuming you are writing a simple program it is probably not a big deal.

Back in the days when memory was expensive, you needed to be a lot more careful with how much memory you use.

Let's say you are word processing on an IBM 5100, one of the first PCs from the 70s, which had a minimum of 16 KB of RAM (unimaginable these days). If you used 64-bit values for every character, you could keep at most 2,048 characters, with no memory left over for the word processing program itself. That's not enough to hold what I'm typing right now!

Knowing that English has a limited number of characters and symbols, if you choose to use ASCII to represent the text, you would use 8 bits (or 1 byte) per character which allows you to go up to about 16,000 characters, and that's quite a bit more room for typing.
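The trade-off in that story is just division: characters that fit = available RAM divided by bytes per character. A quick sketch of the arithmetic (the 16 KB figure is from the IBM 5100 example above):

```java
public class CharacterCapacity {
    public static void main(String[] args) {
        int ramBytes = 16 * 1024; // 16 KB of RAM, as on a minimal IBM 5100

        int with64BitChars = ramBytes / Long.BYTES; // 8 bytes each -> 2,048 chars
        int withAsciiChars = ramBytes / Byte.BYTES; // 1 byte each  -> 16,384 chars

        System.out.println("64-bit encoding: " + with64BitChars + " characters");
        System.out.println("ASCII encoding:  " + withAsciiChars + " characters");
    }
}
```

Same memory, eight times the text, purely from picking a type that is no wider than the data requires.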

Generally you will use a data type that's just big enough to hold the biggest number you might need, to save on memory. Let's say you are writing a database for the IRS to keep track of all the tax IDs: if you are able to save even 1 bit of memory per record, across billions of records that's billions of bits, or hundreds of megabytes, of memory savings.

The ones that can hold higher numbers use more memory. Using more memory is bad. One int versus one byte is not a big difference right now. But if you write big programs in the future, the memory used adds up.

Also, you said double in the title. A double is not like an int. It does hold a number, but it can have decimal places (e.g. 2.36), unlike an int, which can only hold whole numbers like 8.
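The int/double difference shows up most visibly in division: with two ints, Java discards the fractional part, while involving a double keeps it. A minimal illustration:

```java
public class IntVsDouble {
    public static void main(String[] args) {
        int whole = 8;          // ints hold whole numbers only
        double decimal = 2.36;  // doubles can carry a fractional part

        System.out.println(5 / 2);    // integer division: prints 2, remainder is dropped
        System.out.println(5 / 2.0);  // one operand is a double: prints 2.5

        System.out.println(whole);
        System.out.println(decimal);
    }
}
```

So the choice between int and double is not about size first, but about whether your values can be fractional at all.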

Because we like to be professional: a byte takes up less memory than an int, and a double can hold decimal places.
