Simply put, the labels "16-bit," "32-bit" or "64-bit," when applied to a microprocessor, characterize the processor's data stream. Although you may have heard the term "64-bit code," this designates code that operates on 64-bit data.
In more specific terms, the labels "64-bit," "32-bit," etc. designate the number of bits that each of the processor's general-purpose registers (GPRs) can hold. So when someone uses the term "64-bit processor," what they mean is "a processor with GPRs that store 64-bit numbers." And in the same vein, a "64-bit instruction" is an instruction that operates on 64-bit numbers.
In the diagram above, I've tried my best to modify an older diagram in order to make my point. A quick recap, in case you don't remember the original diagram: black boxes are code, white boxes are data, and gray boxes are results. Also, don't take the instruction and code "sizes" too literally, since they're intended to convey a general feel for what it means to "widen" a processor from 32 bits to 64 bits.
You should notice that not all of the data in memory, the cache, or the registers is 64-bit data. Rather, the data sizes are mixed, with 64 bits being the widest. We'll discuss why this is and what it means, shortly. (I should've made the outgoing data stream on the 64-bit processor a mix of 64-bit and 32-bit data, but it would've been too much work to go in and change all of those boxes like that. As it is, I just used the resize function on the whole batch and left it at that.)
Note that in the 64-bit CPU pictured above, the width of the code stream has not changed; the same-sized opcode could theoretically represent an instruction that operates on 32-bit numbers or an instruction that operates on 64-bit numbers, depending on what the opcode's default data size is. (For more on opcodes, see this page. We'll talk about the specifics of x86-64 opcodes in the next section.) On the other hand, the width of the data stream has doubled. In order to accommodate the wider data stream, the sizes of the processor's registers and the sizes of the internal data paths that feed those registers must be doubled.
Now let's take a look at two programming models, one for a 32-bit processor and another for a 64-bit processor.
The registers in the 64-bit CPU pictured above are twice as wide as those in the 32-bit CPU, but the size of the instruction register (IR) that holds the currently executing instruction is the same in both processors. Again, the data stream has doubled in size, but the instruction stream has not. Finally, you might also note that the program counter (PC) has doubled in size. We'll talk about the reason for this, shortly.
Now, what I just told you above was the simple answer to the question, What is 64-bit computing? If we take into account the fact that the data stream is made up of multiple types of data—a fact hinted at in the first comparative diagram above—then the answer gets a bit more complicated.
For the simple processor pictured above, the two types of data that it can process are integer data and address data. Ultimately, addresses are really just integers that designate a memory address, so address data is just a special type of integer data. Hence, both data types are stored in the GPRs, and both integer and address calculations are done by the ALU.
Current 64-bit applications
Now that we know what 64-bit computing is, let's take a look at the benefits of increased integer and address sizes.
Dynamic range
The main thing that a wider integer gives you is increased dynamic range. Instead of defining the term "dynamic range," I'll just show you how it works.
In the base-10 number system to which we're all accustomed, you can represent a maximum of ten integers (0 to 9) with a single digit. This is because base-10 has ten different symbols with which to represent numbers. To represent more than ten integers you need to add another digit, using a combination of two symbols chosen from among the set of ten to represent any one of 100 integers (00 to 99). The general formula that you can use to compute the number of integers (dynamic range, or DR) that you can represent with an n-digit base-ten number is:
DR = 10^n
So a 1-digit number gives you 10^1 = 10 possible integers, a 2-digit number 10^2 = 100 integers, a 3-digit number 10^3 = 1000 integers, and so on.
The base-2, or "binary," number system that computers use has only two symbols with which to represent integers: 0 and 1. Thus, a single-digit binary number allows you to represent only two integers, 0 and 1. With a two-digit (or "2-bit") binary, you can represent four integers by combining the two symbols (0 and 1) in any of the following four ways:
- 00 = 0
- 01 = 1
- 10 = 2
- 11 = 3
Similarly, a 3-bit binary number gives you eight possible combinations, which you can use to represent eight different integers. As you increase the number of bits, you increase the number of integers you can represent. In general, n bits will allow you to represent 2^n integers in binary. So a 4-bit binary number can represent 2^4 = 16 integers, an 8-bit number gives you 2^8 = 256 integers, and so on.
So in moving from a 32-bit GPR to a 64-bit GPR, the range of integers that a processor can manipulate goes from 2^32 (about 4.3 x 10^9) to 2^64 (about 1.8 x 10^19). The dynamic range, then, increases by a factor of 4.3 billion. Thus a 64-bit integer can represent a much larger range of numbers than a 32-bit integer.
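Since this is just exponent arithmetic, the jump in dynamic range is easy to check in a few lines of Python (the function name here is mine, purely for illustration):

```python
def dynamic_range(bits):
    """Number of distinct unsigned integers an n-bit register can hold."""
    return 2 ** bits

print(dynamic_range(32))   # 4294967296, about 4.3 x 10^9
print(dynamic_range(64))   # 18446744073709551616, about 1.8 x 10^19

# The range grows by a factor of 2^32, i.e. about 4.3 billion.
print(dynamic_range(64) // dynamic_range(32))  # 4294967296
```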
The benefits of increased dynamic range, or, how the existing 64-bit computing market uses 64-bit integers
Since addresses are just special-purpose integers, an ALU and register combination that can handle more possible integer values can also handle that many more possible addresses. With all the recent press coverage that 64-bit architectures have garnered, it's fairly common knowledge that a 32-bit processor can address at most 4GB of memory. (Remember our 2^32 = 4.3 billion number? That 4.3 billion bytes is about 4GB.) A 64-bit architecture could theoretically, by contrast, address up to 18 million terabytes.
Of course, there's a big difference between the amount of address space that a 64-bit address value could theoretically yield and the actual sizes of the virtual and physical address spaces that a given 64-bit architecture supports. In the case of x86-64, the virtual address space is 48-bit, which makes for about 282 terabytes of virtual address space. (To borrow a line from my old IA-64 preview article, I'm tempted to say about this number what Bill Gates supposedly said about 640K back in the DOS days: "282 terabytes ought to be enough for anybody." But don't quote me on that in 10 years when Quake 16 takes up three or four million terabytes of hard disk space.) x86-64's physical address space is 40-bit, which can support about 1 terabyte of physical memory.
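The address-space figures above all fall out of the same 2^n arithmetic. Here's a quick Python sketch (the helper name is my own, and I'm using decimal terabytes of 10^12 bytes):

```python
TB = 10 ** 12  # decimal terabyte

def addressable_bytes(bits):
    """Bytes addressable with an n-bit address."""
    return 2 ** bits

print(addressable_bytes(32) / 10 ** 9)       # ~4.3 GB: the 32-bit ceiling
print(addressable_bytes(48) / TB)            # ~281 TB of virtual address space
print(addressable_bytes(40) / TB)            # ~1.1 TB of physical memory
print(addressable_bytes(64) / (10 ** 6 * TB))  # ~18.4 million TB, the 64-bit limit
```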
So, what do you do with over 4GB of memory? Well, caching a very large database in it is a start. Back-end servers for mammoth databases are one place where 64 bits have long been a requirement, so it's no surprise to see upcoming 64-bit offerings billed as capable database platforms.
On the media and content creation side of things, folks who work with very large 2D image files also appreciate the extra RAM. A related, much sexier application domain where large amounts of memory come in handy is simulation and modeling. Under this heading you could put various CAD tools and 3D rendering programs, as well as things like weather and scientific simulations, and even, as I've already half-jokingly suggested, real-time 3D games. Though the current crop of 3D games wouldn't benefit from more than 4GB of RAM, it's quite possible that we'll see a game that does within the next five years. But we'll discuss the possibilities for 64 bits in the consumer space later in the article, so let's not get ahead of ourselves.
There is one drawback to the increase in memory space that 64-bit addressing affords. Since memory address values (or pointers, in programmer lingo) are now twice as large, they take up twice as much cache space. Pointers normally make up only a fraction of all the data in the cache, but when that fraction doubles it can squeeze other useful data out of the cache and degrade performance slightly.
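A back-of-the-envelope sketch of this effect in Python; the cache size and pointer count below are made-up illustrative numbers, not measurements of any real workload:

```python
cache_bytes = 512 * 1024          # assumed: a 512 KB cache
num_pointers = 13_000             # assumed: pointers the program keeps cached

footprint_32 = num_pointers * 4   # 4-byte pointers on a 32-bit machine
footprint_64 = num_pointers * 8   # 8-byte pointers on a 64-bit machine

# The extra bytes the wider pointers occupy displace other useful data.
extra = footprint_64 - footprint_32
print(extra, extra / cache_bytes)  # 52000 bytes, roughly 10% of the cache
```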
Some who read the discussion above would no doubt point out that Xeon systems are available with more than 4GB. Furthermore, Intel supposedly has a fairly simple hack that they could implement to allow their 32-bit systems to address up to 512GB of memory. Still, the cleanest and most future-proof way to address the 4GB ceiling is a larger pointer.
Some applications, mostly in the realm of scientific computing (MATLAB, Mathematica, MAPLE, etc.) and simulations, require 64-bit integers because they work with numbers outside the dynamic range of 32-bit integers. When the result of a calculation exceeds the range of possible integer values, you get a situation called either overflow (i.e. the result was greater than the highest positive integer) or underflow (i.e. the result was less than the largest negative integer). When this happens, the number you get in the register isn't the right answer. There's a bit in the x86's processor status word (see this page for a bit more on the PSW) that allows you to check whether an integer calculation has just exceeded the processor's dynamic range, so you know that the result is bogus. Such situations are very, very rare in integer applications. As an engineering student I never ran into this problem, although I did run into the somewhat related problem of floating-point round-off error a few times.
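Python's integers never overflow on their own, but we can simulate a signed 32-bit register's two's-complement wraparound to see what overflow looks like. The helper below is my own sketch of the idea, not actual hardware behavior:

```python
INT32_MIN, INT32_MAX = -2 ** 31, 2 ** 31 - 1

def add32(a, b):
    """Add two signed 32-bit integers, reporting whether the result overflowed."""
    result = a + b
    overflowed = not (INT32_MIN <= result <= INT32_MAX)
    # Wrap into the 32-bit two's-complement range, as the register would.
    wrapped = ((result + 2 ** 31) % 2 ** 32) - 2 ** 31
    return wrapped, overflowed

print(add32(2_000_000_000, 2_000_000_000))  # (-294967296, True): a bogus result
print(add32(1, 2))                          # (3, False): no overflow
```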
Programmers who run into integer overflow or underflow problems on a 32-bit platform do have the option of using a 64-bit integer construct provided by a higher level language like C. In such cases, the compiler uses two registers per integer, one for each half of the integer, to do 64-bit calculations in 32-bit hardware. This has obvious performance drawbacks, making it less desirable than a true 64-bit integer implementation.
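Here's a rough Python sketch of the kind of work the compiler has to generate in that case: a 64-bit add built from two 32-bit adds, with a carry propagated from the low half to the high half. The function is illustrative, not actual compiler output:

```python
MASK32 = 0xFFFFFFFF

def add64_via_32bit(a, b):
    """Add two unsigned 64-bit values using only 32-bit-wide operations."""
    lo = (a & MASK32) + (b & MASK32)               # first 32-bit add
    carry = lo >> 32                               # did the low half overflow?
    hi = ((a >> 32) + (b >> 32) + carry) & MASK32  # second 32-bit add, plus carry
    return (hi << 32) | (lo & MASK32)

x, y = 0x00000001FFFFFFFF, 0x0000000000000001
print(hex(add64_via_32bit(x, y)))  # 0x200000000: the carry propagated correctly
```

Two dependent adds (plus the masking and shuffling around them) where one would do is exactly the performance penalty the paragraph above describes.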
Finally, there is another application domain for which 64-bit integers can offer real benefits: cryptography. Most popular encryption schemes rely on the multiplication and factoring of very large integers, and the larger the integers the more secure the encryption. As we'll discuss in the final section, AMD is hoping that the growing demand for tighter security and more encryption in the mainstream business and consumer computing markets will make a cheap, 64-bit, x86-compatible processor attractive.
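To make the connection concrete, here's the classic textbook RSA example in Python, with numbers far too small for real security; production schemes use integers hundreds of digits wide, which is where wide hardware registers earn their keep:

```python
# Textbook-sized RSA numbers, purely for illustration.
p, q = 61, 53
n = p * q                  # 3233: security rests on n being hard to factor
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent via modular inverse (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)     # encrypt: modular exponentiation
recovered = pow(ciphertext, d, n)   # decrypt the same way
print(ciphertext, recovered)        # 2790 65
```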
At this point, I should make a quick note of a fact that I'll refer to again in the article's conclusion: increased performance was not mentioned above as a straightforward, across-the-board benefit of increased dynamic range. As I stated previously, 64-bit integer code runs slowly on a 32-bit machine, due to the fact that the 64-bit computations have to be split apart and processed as two separate 32-bit computations. So you could say that there's a performance penalty for running 64-bit integer code on a 32-bit machine; this penalty is absent when running the same code on a 64-bit machine, since the computation doesn't have to be split in two. The take-home point here is that only applications that require and use 64-bit integers will see a performance increase on 64-bit hardware that is due solely to a 64-bit processor's wider registers and increased dynamic range. So there's no magical performance boost inherent in the move from 32 bits to 64 bits, as people are often led to think by journalists who write things like, "64-bit computers can process twice as much data per clock cycle as their 32-bit counterparts." Technically, this is true in a very restricted sense, but it would be better to say the following: "64-bit computers can process numbers that are 4.3 billion times as large as those processed by their 32-bit counterparts." It sounds a lot less sexy because it is, but at least no one is misled into thinking that 64-bitness makes a computer somehow twice as fast.
Ref: http://arstechnica.com/gadgets/2002/03/an-introduction-to-64-bit-computing-and-x86-64/