What does '64 bit' mean? Lame question, but hear me out :)

Larry David

Ok, first of all, let's get the obvious stuff out of the way. I'm an idiot. So please indulge me for a moment. Consider it an act of "community service"....

What does "64bit" mean to your friendly neighborhood C# programmer? The standard answer I get from computer sales people is: "It means that the CPU can process 64 bits of data at a time instead of 32." Ok... I guess I *kind* of understand what that means at an intuitive level, but what does it mean in practice? Consider the following code:

long l = 1;
for (int i = 0; i < 5; i++) {
    Console.WriteLine("Emo says " + l);
    l += 1;
}

How would this code run differently on a 64 bit processor as opposed to a 32 bit processor? Will it run twice as fast since the instructions are processed "64 bits at a time"? Will the 64 bit (long) variable 'l' be incremented more efficiently since now it can be done in a single processor instruction?

Now I want to ask about memory. I think this is the one benefit of 64bit computing that I DO understand. In a 32bit system, a memory pointer can only address 2^32 bytes of process memory versus 2^64 bytes of memory (wow!) in a 64bit system. I can see how this would be a major advantage for databases like SQL Server which could easily allocate over 4gigs of memory -- but is this a real advantage for a typical C# application?

Finally, I want to ask about interoperability. If I compile a 32bit C# app, will the ADO.NET code that it contains be able to communicate with the 64bit version of SQL Server?


Thanks for helping a newbie,

Larry
 
Bruce Wood

I'm no guru when it comes to 64 bit, but here's my understanding.

"64 bit", in terms of raw speed, means "faster" for some operations,
although likely not the example you gave. Why? Because when moving
large amounts of data around, the processor can move them 64 bits at a
time instead of 32 bits of a time. Sort of like doubling the lanes on a
freeway. Now, if you're only moving one car around, doubling the lanes
makes no difference. It makes a difference only at rush hour.

You're right about the pointer thing. 64-bit processors can address
about four billion times more memory than 32-bit processors (2^64
versus 2^32). That's a lot of memory. Of course, that's what they said
about 64 KB way back when. :)

As far as compatibility, this is what I've heard.

Your example of talking to SQL Server through ADO.NET is the easiest to
answer: all will be well. Because data moving between ADO.NET and SQL
Server is (usually) serialized across a network, neither end knows what
processor the other end is using or even what language it's written in.
Ahh, the beauty of decoupling.
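
For instance (just a sketch off the top of my head, with a made-up connection string), this sort of code behaves identically whether the client process or the server happens to be 32-bit or 64-bit:

using System;
using System.Data.SqlClient;

class BitnessDemo
{
    static void Main()
    {
        // The bitness of this process and of SQL Server never meet:
        // everything travels over the wire protocol on the network.
        using (SqlConnection conn = new SqlConnection(
            "Server=myServer;Database=myDb;Integrated Security=SSPI"))
        {
            conn.Open();
            using (SqlCommand cmd = new SqlCommand("SELECT @@VERSION", conn))
            {
                Console.WriteLine(cmd.ExecuteScalar());
            }
        }
    }
}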

A more interesting question is whether your 64-bit .NET application
will be able to call old 32-bit DLLs to do things, or vice versa:
whether your 32-bit .NET application will be able to call
64-bit-compiled DLLs to do things. I know that all 64-bit processors
have 32-bit compatibility modes built in, so they'll "downshift" to run 32-bit
code, but I can't recall what was said about one type calling the
other. I'll leave that to wiser folk.
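
About the only thing I can say for sure from the managed side (again, a sketch, not gospel): you can check the pointer size at run time. A managed assembly built for "Any CPU" gets JITted to whatever the host process is, but any native DLL it P/Invokes has to match the bitness of that process -- the two can't mix inside one process.

using System;

class WhichBitness
{
    static void Main()
    {
        // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit one.
        // Any unmanaged DLL this process loads must be built for the
        // same bitness; 32-bit and 64-bit code can't share a process.
        Console.WriteLine("Running as a {0}-bit process", IntPtr.Size * 8);
    }
}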
 
Greg Merideth

AMD has a whitepaper available covering some of the key benefits of
64-bit programming.

http://www.amd.com/us-en/assets/content_type/DownloadableAssets/dwamd_Value_of_AMD64_White_Paper.pdf

As far as your example goes, going from 32 to 64 bit would make no
difference whatsoever. Your application needs to be programmed to
take advantage of the extra system capabilities.

Consider what games could do graphics-wise with DirectX 6 versus what
they can do now with a high-end video card and DirectX 9. Running an
ancient game (think of King's Quest) on a system with DirectX 9 will not
make the game look any nicer; you need to write something that takes
advantage of what DX9 and the hardware can offer.

DOS apps that were written to use extended memory (beyond the 640K
barrier) didn't suddenly get to use 4 GB of RAM when run under Windows
with tons of memory; they needed to be rewritten to take advantage of
the architecture.

Same with 64 bit. As a typical programmer you may never notice a
difference in your day-to-day coding, but you can bet the compiler will
know what to do with those extra registers.
 
Christoph Nahr

"64 bit" is not a clearly defined label. It means that *something*
inside the CPU is 64 bit wide but it doesn't say what!

Generally, though, a 64-bit CPU can be expected to have a "word size"
of 64 bit. A "word" is the unit of data that the CPU can transport
and process without having to slice it up into smaller pieces.

So a 64-bit CPU should be able to perform 64-bit integer arithmetic at
the same speed as today's 32-bit CPUs perform 32-bit arithmetic. And
the size of a memory address should be 64 bit as well, which gives you
the increased memory range you mentioned.

But things immediately get a bit blurry again, because the *physical*
memory range of a CPU may well be restricted to less than 64 bits for
technical or cost reasons; for instance, the Intel 386SX was a 32-bit
CPU with only a 24-bit address bus. On the other hand, current 32-bit
CPUs can actually process up to 80 bits internally at once, but only
in the floating-point unit (FPU).

And then there's the problem with wasted space. Lots of data actually
fits in 32 bits just fine, which is one reason why we're so slow to
move to 64-bit systems. Now when you have a 64-bit CPU but you
actually just need 32-bit numbers you have two choices: pack two
32-bit numbers each into a 64-bit word and waste time with packing &
unpacking; or only put one 32-bit number in a 64-bit word and waste
half the memory space, in main memory and in the CPU cache!
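
To see the memory half of that trade-off from C# (a rough sketch; the exact numbers depend on struct layout and packing), just widen your fields from int to long and watch the footprint double:

using System;
using System.Runtime.InteropServices;

struct PairOfInts  { public int  A; public int  B; }   // two 32-bit values
struct PairOfLongs { public long A; public long B; }   // same data in 64-bit slots

class WastedSpace
{
    static void Main()
    {
        // If the values really fit in 32 bits, the long version just
        // wastes half of every cache line it touches.
        Console.WriteLine(Marshal.SizeOf(typeof(PairOfInts)));   // 8
        Console.WriteLine(Marshal.SizeOf(typeof(PairOfLongs)));  // 16
    }
}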

So whether a 64-bit CPU will actually speed up your application is
rather doubtful. You can only expect a significant gain if you're
already processing 64-bit integers. Likewise, the increased memory
range will only benefit you directly if you're rummaging through huge
databases; however, since operating systems and applications tend to
get bigger and bigger anyway, this should still benefit the user who
runs multiple programs at once.

The whole situation is quite a bit different from the 16-to-32 bit
switch, from a perspective of expected gains. Back then everyone was
constantly bumping against the 16-bit range, which simply isn't enough
to do much useful work, either in terms of value ranges or in terms of
memory space. We're slowly exhausting the 2 GB of address space Windows
leaves for apps, but it's not critical yet, and 32 bits as a
computational range have proved sufficient for nearly anything...
 
Cor Ligthert

Larry,

Mainframes have used much wider registers in their processors for a long time.

Wider registers mean that the same work can be done in fewer cycles. In the
beginning this -- along with the microprocessor's very limited instruction
set, which itself required more cycles -- was the main difference between a
microprocessor and a mainframe processor.

Wider registers are also needed for memory addressing, which in practice
happens far more often than all the processing you write in your own
programs.

When you know how much was done in the sixties with 8 KB, you may wonder why
today's memory is not enough; but now we want more and more fast multimedia
processing, and that needs huge amounts of memory to be done well.

Just my thought,

Cor
 
Niki Estner

Larry David said:
...
Consider the following code:

long l = 1;
for (int i = 0; i < 5; i++) {
    Console.WriteLine("Emo says " + l);
    l += 1;
}
How would this code run differently on a 64 bit processor as opposed to a 32 bit processor?

I didn't test it, but I'm quite sure that "Console.WriteLine" takes > 99% of
the time in this sample. This operation is probably memory-bound, i.e. the
CPU spends most of the time waiting for the RAM. A 64-bit memory interface
would probably make this a lot faster.
Will it run twice as fast since the instructions are processed "64 bits at a time"?

Depends on many many other factors, like cache size, memory speed, graphics
speed...
Will the 64 bit (long) variable 'l' be incremented more efficiently since now it can be done in a single processor instruction?

Probably yes. Current 32-bit processors do have 64-bit and 128-bit
registers/operations (MMX & SSE), but AFAIK neither the .NET JIT nor VC++'s
native compiler emits these, so adding two longs takes two additions on a
current processor.
Another thing you should keep in mind is that the JIT (just like any good
compiler) tries to enregister variables, to reduce slow memory accesses.
Enregistering a 64-bit variable into two 32-bit registers is quite expensive,
as x86 processors don't have that many registers.
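
If you want to measure it, a crude micro-benchmark along these lines (an untested sketch; Stopwatch needs .NET 2.0, and a clever JIT could skew the numbers) should show the long accumulator lagging on a 32-bit CPU and keeping pace on a 64-bit one:

using System;
using System.Diagnostics;

class LongVsInt
{
    const int Iterations = 100000000;

    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        int sum32 = 0;
        for (int i = 0; i < Iterations; i++) sum32 += 1;   // one 32-bit add per step
        Console.WriteLine("int:  {0} ms (sum {1})", sw.ElapsedMilliseconds, sum32);

        sw = Stopwatch.StartNew();
        long sum64 = 0;
        for (int i = 0; i < Iterations; i++) sum64 += 1;   // two adds on x86, one on x64
        Console.WriteLine("long: {0} ms (sum {1})", sw.ElapsedMilliseconds, sum64);
    }
}
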
Now I want to ask about memory. I think this is the one benefit of 64bit computing that I DO understand. In a 32bit system, a memory pointer can only address 2^32 bytes of process memory versus 2^64 bytes of memory (wow!) in a 64bit system. I can see how this would be a major advantage for databases like SQL Server which could easily allocate over 4gigs of memory -- but is this a real advantage for a typical C# application?

You can turn off the GC and save lots of time ;-)

Seriously, if you need that much memory (e.g. for processing high-res
medical tomography data), you'll benefit from it; otherwise, you probably
won't. A large address space does have other benefits (e.g. disk access is
often done through the memory interface as well, where 4 GB isn't that much
any more), but I think .NET mostly shields you from those, because of its
portability.

Niki
 
chrisv

Christoph said:
The whole situation is quite a bit different from the 16-to-32 bit
switch, from a perspective of expected gains. Back then everyone was
constantly bumping against the 16-bit range, which simply isn't enough
to do much useful work, either in terms of value ranges or in terms of
memory space. We're slowly exhausting the 2 GB of address space Windows
leaves for apps, but it's not critical yet, and 32 bits as a
computational range have proved sufficient for nearly anything...

Weird, though, that the masses moved (or will move) from 16 to 32 to
64-bit machines in a relatively short time-frame, but will likely stay
at 64 for a much longer time. Decades? Maybe some will call me naive,
but it's hard to imagine anyone needing more addressability than what 64
bits offer...
 
Keith R. Williams

Weird, though, that the masses moved (or will move) from 16 to 32 to
64-bit machines in a relatively short time-frame, but will likely stay
at 64 for a much longer time. Decades? Maybe some will call me naive,
but it's hard to imagine anyone needing more addressability than what 64
bits offer...

Well, if you believe Moore's law will remain in effect... Memory
doubles approximately every 1.5 years. So it should have taken 24 years
(16 extra address bits * 1.5 years per bit) to run out of 32 bits, which
is not *too* far off. Again ASSuming Moore holds, we should have 48
years left in 64b processors. ...and another 96 years in 128bit
processors. ;-)

OTOH, a 128bit FXU would go a long way in eliminating those dreaded
floats. ;-)
 
J. Jones

Larry said:
What does "64bit" mean to your friendly neighborhood C# programmer?
The standard answer I get from computer sales people is: "It means that
the CPU can process 64 bits of data at a time instead of 32."

64-bit, IA64, et al. are 97% marketing hype that has little/no value to
consumers. However, microprocessor manufacturers are always looking for ways
to sell more chips, so what you are seeing is the start of a marketing blitz
which will undoubtedly focus on 'more is better'.

The fact of the matter is that 64-bit architectures will only benefit
large-scale (database) servers, in that they allow for a greatly expanded
addressable memory space. 64-bit file access is already possible under Win32.

In answer to your question, 64-bit means nothing to your 'friendly neighborhood
C# programmer'.
 
GSV Three Minds in a Can

chrisv said:
Weird, though, that the masses moved (or will move) from 16 to 32 to
64-bit machines in a relatively short time-frame, but will likely stay
at 64 for a much longer time. Decades? Maybe some will call me naive,
but it's hard to imagine anyone needing more addressability than what 64
bits offer...

The steps are not linear though .. 32 bit was 64k times more address
space than 16 bit.

64 bit is 4,194,304k times bigger address space than 32bit .. a rather
taller step.

The next step to 128 bits is ridiculous - there isn't enough memory on
the planet to require a 128bit address right now (however I can think of
some uses for 128 bit math!).

Actually for most of us the main advantage is going to be faster 64bit
(and up) maths and more 64bit (and up) registers, and higher bandwidth.
All of which are really useful for video/photo editing and encoding and
similar stuff (and halfway useful for some maths intensive stuff).

'Hello world' will probably run no faster .. maybe slower .. and will
almost certainly be a larger executable.
 
Keith R. Williams

The steps are not linear though .. 32 bit was 64k times more address
space than 16 bit.

Of course they aren't. Moore isn't either. Every bit doubles the
address space. Moore's "law" says that transistors (thus memory cells)
double every 18 months. Thus address bits are linear with time
(.67bits/year), if you believe Moore.
64 bit is 4,194,304k times bigger address space than 32bit .. a rather
taller step.

Nope, it's only 32 "Moore-intervals" bigger. Moore is logarithmic too.
The next step to 128 bits is ridiculous - there isn't enough memory on
the planet to require a 128bit address right now (however I can think of
some uses for 128 bit math!).

Oh, you're not a believer in Moore. Tsk, tsk.
Actually for most of us the main advantage is going to be faster 64bit
(and up) maths and more 64bit (and up) registers, and higher bandwidth.
All of which are really useful for video/photo editing and encoding and
similar stuff (and halfway useful for some maths intensive stuff).

In this particular case, there is also an advantage to more registers.
But we're getting close to the virtual memory limit (which is in
reality about 2GB, not 4GB). 64b solves that problem for at least my
lifetime (less than 96 years ;-).
'Hello world' will probably run no faster .. maybe slower .. and will
almost certainly be a larger executable.

No reason for it to be larger at all. No reason for it to be slower
either. Since it's not doing anything, there is no reason to assume it
will be faster though.
 
Yousuf Khan

Bruce said:
A more interesting question is whether your 64-bit .NET application
will be able to call old 32-bit DLLs to do things, or vice versa:
whether your 32-bit .NET application will be able to call
64-bit-compiled DLLs to do things. I know that all 64-bit processors
have 32-bit compatibility modes built in, so they'll "downshift" to run 32-bit
code, but I can't recall what was said about one type calling the
other. I'll leave that to wiser folk.

Well, actually the whole idea of DLLs is outdated in .NET, isn't it? The
idea of .NET was to create a framework that is independent of
architecture (albeit mostly limited to Microsoft operating systems). So
a program, once compiled to IL, doesn't care if it's on a 32-bit processor
or a 64-bit one, or even whether it's running on an x86-compatible processor
for that matter. There is no dependence on bittedness or instruction set.

Yousuf Khan
 
Yousuf Khan

Christoph said:
"64 bit" is not a clearly defined label. It means that *something*
inside the CPU is 64 bit wide but it doesn't say what!

Generally, though, a 64-bit CPU can be expected to have a "word size"
of 64 bit. A "word" is the unit of data that the CPU can transport
and process without having to slice it up into smaller pieces.

Actually, I think the word size is always the same size, 16-bit. 32-bit
is called double word (dword), and 64-bit is called quadword (qword).

I think what you're really trying to talk about is called the "register size".
And then there's the problem with wasted space. Lots of data actually
fits in 32 bits just fine, which is one reason why we're so slow to
move to 64-bit systems. Now when you have a 64-bit CPU but you
actually just need 32-bit numbers you have two choices: pack two
32-bit numbers each into a 64-bit word and waste time with packing &
unpacking; or only put one 32-bit number in a 64-bit word and waste
half the memory space, in main memory and in the CPU cache!

There's not necessarily any wasted space; it depends on the 64-bit data
model that Microsoft adopts for Windows. If you look at this link, it
discusses the various 64-bit models, such as LP64, ILP64, & LLP64. I
believe Microsoft has actually chosen the LLP64 model, which means
pointers are 64-bit but both ints and longs remain 32-bit (with a
separate 64-bit integer type when you need one).

64-BIT PROGRAMMING MODELS
http://www.opengroup.org/public/tech/aspen/lp64_wp.htm

These models recognize the fact that most calculations won't require
64-bit integers (if you /really/ do need one, there's a dedicated 64-bit
type), but memory addressing certainly will.
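
Note that for C# code itself the data-model question largely disappears, because the language pins its primitive sizes; only the native word size follows the processor. A quick sketch to illustrate:

using System;

class DataModelDemo
{
    static void Main()
    {
        // C# fixes these in the language spec, on every platform:
        Console.WriteLine("int  occupies {0} bytes", BitConverter.GetBytes((int)0).Length);   // always 4
        Console.WriteLine("long occupies {0} bytes", BitConverter.GetBytes((long)0).Length);  // always 8

        // Only the native pointer/word size tracks the processor:
        Console.WriteLine("IntPtr.Size = {0} bytes", IntPtr.Size);                            // 4 or 8
    }
}
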
So whether a 64-bit CPU will actually speed up your application is
rather doubtful. You can only expect a significant gain if you're
already processing 64-bit integers. Likewise, the increased memory
range will only benefit you directly if you're rummaging through huge
databases; however, since operating systems and applications tend to
get bigger and bigger anyway, this should still benefit the user who
runs multiple programs at once.

The 64-bit CPU will speed up your applications, but not because of the
64-bit upgrade itself. Some CPU manufacturers have taken the opportunity
to add a lot of other features at the same time as they upgraded the
registers. For example, they took the opportunity to add faster memory
interfaces into the processor. They also doubled the number of
general-purpose registers from 8 to 16. So even if you never need the
full 64 bits, you still have access to twice as many registers. Etc.
The whole situation is quite a bit different from the 16-to-32 bit
switch, from a perspective of expected gains. Back then everyone was
constantly bumping against the 16-bit range, which simply isn't enough
to do much useful work, either in terms of value ranges or in terms of
memory space. We're slowly exhausting the 2 GB of address space Windows
leaves for apps, but it's not critical yet, and 32 bits as a
computational range have proved sufficient for nearly anything...

Actually it only seemed that way because Intel's 16-bit x86 instruction
set really used a 20-bit memory addressing model; in other words, it was
an extended version of 16-bit. If Intel had used a pure 16-bit memory
model, then the limit would really have been 64 KB, and we would probably
have been ready to switch to a pure 32-bit instruction set by 1982. I
don't know if you remember computers like the Commodore 64 or the Apple
II, which had pure 16-bit addressing. Intel extended the life of 16-bit
by almost a decade because of this one kludge. But it was a kludge, and
eventually all kludges come to a screeching halt and everybody clamours
to get away from them.

Yousuf Khan
 
Yousuf Khan

GSV said:
'Hello world' will probably run no faster .. maybe slower .. and will
almost certainly be a larger executable.

I can recall writing a fully functional "hello world" program in
16 bytes; most of the space was used up holding the letters for "hello
world", and the rest were the instructions. :)

Assembly language was a gas.

Yousuf Khan
 
keith

Actually, I think the word size is always the same size, 16-bit. 32-bit
is called double word (dword), and 64-bit is called quadword (qword).

Yousuf, you're just so PC! The term "word" has been used in so many
different ways that it's not possible to tell what it is without defining
the architecture. For example, an S/360 "word" is 32 bits. A "word" was
originally the term used for the size of the register(s), or "bitness", if
you must. It's changed meaning several times since, but there is no
standard "word".
I think what you're really trying to talk about is called the "register size".

"bitness". ;-)

<snip>
 
Derrick Coetzee [MSFT]

Larry said:
How would this code run differently on a 64 bit processor as opposed
to a 32 bit processor? Will it run twice as fast since the instructions
are processed "64 bits at a time"? Will the 64 bit (long) variable 'l'
be incremented more efficiently since now it can be done in a single
processor instruction?

I can think of several ways in which 64-bit processors impact everyday
programming, although much of it applies to low-level systems
programming rather than high-level applications development:

1. They speed up extended-precision arithmetic. Many number-theoretic
programs, such as encryption algorithms, use very large integers - say,
1024 bits. They do calculations on these a word at a time, so the fewer
the words, the more quickly they can operate (or alternatively, the larger
the integers they can use in the same time). Extended-precision integers
are also an important basic datatype in many functional languages.

2. You can encode more stuff in your pointers. Considering no real
machines will have even 2^40 bytes of memory, we suddenly have a lot of
free bits inside pointers that we can use to encode additional
information about those references, which is a very useful trick in
interpreters, virtual machines, and garbage collectors.

3. Internal fragmentation of bitfields is decreased. If you have three
20-bit fields, you would need three 32-bit words to hold them, but only
one 64-bit word, assuming you didn't want them crossing word boundaries
(a good assumption if you want to modify them quickly). Now imagine a
million 20-bit fields.

4. They speed up bit array access. You can load, store, and manipulate
an array of up to 64 bits in a single register quickly. Some operations
on larger bit arrays are also sped up, such as finding the first set bit
(you can skip blocks of 64 zero bits at a time) -- see the sketch after
this list.

5. They can be used to perform some 32-bit operations more efficiently.
For example, you can divide a 32-bit number by a constant by performing
a 64-bit multiply followed by a shift. As another example, if you pack
four 32-bit numbers into two 64-bit words, you can compute their pairwise
AND, OR, XOR, etc. in one operation. You can get a lot cleverer than
this, even creating custom algorithms that depend on packing data into
64-bit numbers.

6. They can make accidental overflow in C programs less likely, if the
"int" type is made 64 bits wide on such a machine (in practice most
64-bit data models keep int at 32 bits, and this is no substitute for
overflow checking anyway). Alternatively, you can use the extra bits as
a fast way of detecting overflow of 32-bit quantities, in lieu of an
overflow flag (or a way of checking it).
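
To make items 4 and 5 a little more concrete, here's a rough C# sketch (illustrative only, not tuned):

using System;

class SixtyFourBitTricks
{
    // Item 4: find the first set bit, skipping 64 zero bits per comparison.
    static int FirstSetBit(ulong[] bits)
    {
        for (int block = 0; block < bits.Length; block++)
        {
            ulong word = bits[block];
            if (word == 0) continue;              // skip a whole empty block at once

            int offset = 0;
            while ((word & 1UL) == 0)             // locate the bit inside the block
            {
                word >>= 1;
                offset++;
            }
            return block * 64 + offset;
        }
        return -1;                                // no bit set anywhere
    }

    // Item 5: pack two 32-bit values into one 64-bit word...
    static ulong Pack(uint hi, uint lo)
    {
        return ((ulong)hi << 32) | lo;
    }

    static void Main()
    {
        ulong[] bits = new ulong[4];
        bits[2] = 1UL << 13;                      // set bit 141 (2 * 64 + 13)
        Console.WriteLine(FirstSetBit(bits));     // prints 141

        // ...so that one XOR performs two 32-bit XORs at the same time.
        ulong x = Pack(0xDEADBEEF, 0x12345678);
        ulong y = Pack(0xFFFFFFFF, 0x0000FFFF);
        ulong both = x ^ y;
        Console.WriteLine("{0:X8} {1:X8}", (uint)(both >> 32), (uint)both);  // 21524110 1234A987
    }
}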

That's a few things, and they may impact your app indirectly, but by and
large it's more likely 64-bit machines will *break* your program than
make it faster. Be careful and never assume a certain word size, even
implicitly. In particular, don't serialize integers' bit patterns
directly to/from memory.
 
The little lost angel

Nope, it's only 32 "Moore-intervals" bigger. Moore is logarithmic too.

In other words, instead of being 4,194,304K steps up, it's only 32
Moore steps? ;)

p.s. sorry can't help it :pPpP

 
The little lost angel

Actually, I think the word size is always the same size, 16-bit. 32-bit
is called double word (dword), and 64-bit is called quadword (qword).

I don't think this is universal, Yousuf. I remember some years back
getting rather confused trying to figure out some programming stuff
where some of the docs I found kept referring to a "word" and threw in
32-bit along the way.
 
Christoph Nahr

Actually, I think the word size is always the same size, 16-bit. 32-bit
is called double word (dword), and 64-bit is called quadword (qword).

Yeah, it's become common usage to refer to 16 bits as a "word", but
originally the "word size" of a CPU meant the width of its data and/or
address registers. The terminology kind of ossified in the 16-bit
days, hence the usage of "word" == 16 bits has stuck...
 
GSV Three Minds in a Can

Keith R. Williams said:
Of course they aren't. Moore isn't either. Every bit doubles the
address space. Moore's "law" says that transistors (thus memory cells)
double every 18 months. Thus address bits are linear with time
(.67bits/year), if you believe Moore.

I don't - not to the extent of another 64 steps, anyway.

Oh, you're not a believer in Moore. Tsk, tsk.

Nope, I worked in the SC industry for 25 years. I met the man, once.
However, it was a good heuristic for a while. There are no exponential
growth curves that go on forever .. in fact Moore's law has just about
run out.

No reason for it to be larger at all. No reason for it to be slower
either.

Depends on the machine architecture .. something sufficiently optimised
for 64 bits may well run 32- or 16- or 8-bit code slower. There is also
likely to be some interesting new cr&p headers in the binary which say
'the following is 32 bit code'. If it isn't 32 bit code, then you can
assume that the instructions got longer, and everything is now 8-byte
aligned.

Go look at what a 'hello world' looks like now, vs the 8086 machine
code (.com) version, and tell me it ain't larger. (I'd allow as how it
is faster!)
 
