What does '64 bit' mean? Lame question, but hear me out :)


Robert Myers

On Sun, 23 Jan 2005 11:10:09 -0500, Robert Myers wrote:


All of it? ...and not only the "old" stuff. Mainframes have been
virtualized for decades. ...though perhaps in a slightly different
meaning of "virtualized".

Looking at it another way, I'd propose that most modern processors
are virtualized, including x86. The P4/Athlon (and many before) don't
execute the x86 ISA natively; rather, they "interpret" it onto a RISCish
processor.
I take your point, but including microcode stretches the notion of
virtualization too far on one end the way that including tokenized
Basic stretches it too far on the other. I'm too lazy to try to come
up with a bullet-proof definition, but there is a class of virtual
machines that could naturally be implemented in hardware but are
normally implemented in software: p-code, java byte-code, m-code, and
I would put executing 360 instructions on x86 in that class.
Interpreting x86 into microcode is done in hardware, of course.
MSIL, the intermediate code for .NET, actually does compile to machine
code, apparently, and is not implemented on a virtual machine.

The term "virtualize" is pretty broad. One kind of virtualization,
the kind that VMware does or that I think Power5 servers do, virtualizes
the processor to its own instruction set, and I expect _that_ kind of
virtualization to become essentially universal for purposes of
security. You get the security and compartmentalization benefits of
that kind of virtualization for free when you do instruction
translation by running on a virtual machine in software.
I don't see it as "better" in any meaning of the word. Java's purpose in
life is to divorce the application from the processor and OS. I can't
see how .net is "better" at this. If platform independance isn't wanted,
why would anyone use Java?

I barely know Java, and C# not at all. C# is reputed to be nicer for
programming.

RM
 

Yousuf Khan

George said:
It's just not real code and its source is not real software. :) This
abuse of blurring the difference is going too far. What's the point of
faster and faster processors if they just get burdened with more and more
indirection. Neither Java, nor any other language, *has* to produce
interpretive object code.

Such languages have their place and reasons for use -- from security to
laziness, or just toy applications -- but to suggest that DLLs, which
already have the burden of symbolic runtime linkage, are now "outdated" is
scary.

Not sure why you're so married to the concept of DLLs; they had their
purpose a few years ago. They were much better than the static-linked
libraries they replaced because they were brought into memory only
when they were needed, not all at once at the beginning. But now the
requirement is for code that isn't dependent on underlying processor
architecture, and we have Java and .NET. These aren't exactly the same
as the old-fashioned interpreted code either; these are decoded
only once, on the fly, and then they exist cached as machine code while
they run.
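
(To make the load-on-demand idea concrete: one way a shared library comes in
only when asked for is explicit runtime loading. A minimal sketch, using the
POSIX dlopen/dlsym calls rather than the Windows LoadLibrary/GetProcAddress
pair, just to show the shape of it:)

/* on_demand.c - sketch of explicit load-on-demand linking.
 * Build with: cc on_demand.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Nothing from the math library is mapped into this process
       until the dlopen() call below. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve one symbol at run time, the way a late-bound import works. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}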

Yousuf Khan
 

George Macdonald

Not sure why you're so married to the concept of DLLs; they had their
purpose a few years ago. They were much better than the static-linked
libraries they replaced because they were brought into memory only
when they were needed, not all at once at the beginning. But now the
requirement is for code that isn't dependent on underlying processor
architecture, and we have Java and .NET. These aren't exactly the same
as the old-fashioned interpreted code either; these are decoded
only once, on the fly, and then they exist cached as machine code while
they run.

DLLs are just the way it's done with Windows - nothing to do with being
married to anything; DLLs only got out of hand because of the fluff burden.
What irks me is machine cycles being pissed away on the indirection of
pseudo code. To me any suggestion that you can do serious computing with
this stuff, and do away with real machine code for system level library
functions, is madness.
 

keith

I take your point, but including microcode stretches the notion of
virtualization too far on one end the way that including tokenized
Basic stretches it too far on the other. I'm too lazy to try to come
up with a bullet-proof definition,

I understand. It's impossible to categorize such things because there is
such a continuum of architectures that have been tried. However, you are
pretty loosey-goosey with your term "virtual". Remember VM/360?

but there is a class of virtual
machines that could naturally be implemented in hardware but are
normally implemented in software: p-code, java byte-code, m-code, and I
would put executing 360 instructions on x86 in that class.

Ok, a better example of your class of "virtualization" would be the 68K on
PPC. I call that emulation, not virtualization. I call what VM/360,
and later, did "virtualization". The processor virtualized itself.

Ok, if you don't like microcode (what is your definition of "microcode",
BTW) as a virtualizer, now it's your turn to tell me why you think
"emulation" is "virtualization". ;-)
Interpreting x86
into microcode is done in hardware, of course. MSIL, the
intermediate code for .NET, actually does compile to machine code,
apparently, and is not implemented on a virtual machine.

Ok, what would you call a Java byte-code machine?
The term "virtualize" is pretty broad.

Indeed, but it helps if we all get our terms defined if we're going
to talk about various hardware and feechurs.
One kind of virtualization, the
kind that VMware does or that I think Power5 servers do, virtualizes the
processor to its own instruction set, and I expect _that_ kind of
virtualization to become essentially universal for purposes of security.

Too bad x86 is soo late to that table. M$ wanted no part of that though.
This brand of virtualization would have put them out of business a decade
ago. BTW, I call the widget that allows this brand of "virtualization" a
"hypervisor" (funny, so does IBM ;-).
You get the security and compartmentalization benefits of that kind of
virtualization for free when you do instruction translation by running
on a virtual machine in software.
Free?

I barely know Java, and C# not at all. C# is reputed to be nicer for
programming.

Perhaps, if you want to be forever wedded to Billy.
 

YKhan

George said:
DLLs are just the way it's done with Windows - nothing to do with being
married to anything; DLLs only got out of hand because of the fluff burden.
What irks me is machine cycles being pissed away on the indirection of
pseudo code. To me any suggestion that you can do serious computing with
this stuff, and do away with real machine code for system level library
functions, is madness.

Machine cycles aren't so precious anymore; the software side hasn't
kept up with the developments on the hardware side for quite some time
now. Now's as good a time as any to try out these indirection
techniques. It will more than likely help out in the future, as it will
probably mean we're less tied down to any one processor architecture.
Piss away a couple of machine cycles for machine independence?
Sure, sounds good to me.

Yousuf Khan
 

George Macdonald

Machine cycles aren't so precious anymore; the software side hasn't
kept up with the developments on the hardware side for quite some time
now. Now's as good a time as any to try out these indirection
techniques. It will more than likely help out in the future, as it will
probably mean we're less tied down to any one processor architecture.
Piss away a couple of machine cycles for machine independence?
Sure, sounds good to me.

But it's not a couple of machine cycles - it bogs the whole thing down. If
you restrict it to a user interface, where the end user is allowed to talk to
the system through this clunker, you *might* be able to get away with it. I
stress *serious* work here - the core load of the "system" (OS + services +
app). This stuff has already been tried at various levels: from Alpha to
Transmeta... it doesn't work to, err, satisfaction.

Given that we are at the wrong end of the exponential slope of hardware
scaling, machine cycles are likely to become more precious. :)
 

Robert Myers

On Sun, 23 Jan 2005 15:47:09 -0500, Robert Myers wrote:

Ok, a better example of your class of "virtualization" would be the 68K on
PPC. I call that emulation, not virtualization. I call what VM/360,
and later, did "virtualization". The processor virtualized itself.

Ok, if you don't like microcode (what is your definition of "microcode",
BTW) as a virtualizer, now it's your turn to tell me why you think
"emulation" is "virtualization". ;-)

The definition game just isn't very much fun. Emulation is one
processor pretending to be another. Virtualization is when you pull
the "machine" interface loose from the hardware so that the machine
you are interacting with has state that is independent of the physical
hardware. That's why I don't want to call microcode virtualization.
Ok, what would you call a Java byte-code machine?


Indeed, but it helps if we all get our terms defined if we're going
to talk about various hardware and feechurs.


Too bad x86 is soo late to that table. M$ wanted no part of that though.
This brand of virtualization would have put them out of business a decade
ago. BTW, I call the widget that allows this brand of "virtualization" a
"hypervisor" (funny, so does IBM ;-).
You may think that kind of virtualization should belong to IBM, and
you may be right, but I don't expect to see hypervisor used as
anything but a proprietary IBM marketing term.
The hard part is pulling the virtual processor loose from the
underlying hardware. Once the state of your "machine" is separate
from hardware, you can examine it, manipulate it, duplicate it, keep
it from being hijacked,...all without fear of unintentionally
interfering with the operation of the machine. If you're trying to
emulate one processor on another, the virtual processor is
automatically separated from the hardware.
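
(A toy illustration of what I mean, nothing like a real hypervisor's
interface: once the "machine" is just a data structure, anything outside it
can snapshot, inspect, or roll it back without the guest ever noticing:)

/* toy_vm.c - toy sketch: when the machine state is just data, it can be
 * examined, duplicated and restored from outside. Not a real VMM. */
#include <stdio.h>

struct machine {            /* the entire architected state of a tiny CPU */
    unsigned pc;
    unsigned acc;
    unsigned mem[16];
};

/* one "instruction": add mem[pc] into the accumulator, then advance */
static void step(struct machine *m)
{
    m->acc += m->mem[m->pc % 16];
    m->pc++;
}

int main(void)
{
    struct machine m = { 0, 0, { 1, 2, 3, 4 } };
    struct machine snapshot;

    step(&m); step(&m);
    snapshot = m;                /* checkpoint: duplicate the whole machine */

    step(&m); step(&m);
    printf("after 4 steps: acc=%u\n", m.acc);

    m = snapshot;                /* roll back; the "guest" can't tell */
    printf("rolled back:   acc=%u pc=%u\n", m.acc, m.pc);
    return 0;
}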
Perhaps, if you want to be forever wedded to Billy.

The long-term fate of Mega$loth will be interesting to watch. Will they
accomplish the customer-in-leg-irons routine that IBM tried but
ultimately failed at? I'm doubting it, just like I'm doubting that
x86 is forever.

RM
 

Niki Estner

Yousuf Khan said:
...
Not sure why you're so married to the concept of DLLs; they had their
purpose a few years ago. They were much better than the static-linked
libraries they replaced because they were brought into memory only
when they were needed, not all at once at the beginning.

???
Windows always loads code when it's needed; it doesn't make a difference if
it's in a DLL or not. Executable files (that's EXEs and DLLs) are
memory-mapped, and loaded into main memory on first access. Also, DLLs didn't
replace static libraries. Both concepts are commonly used in unmanaged
programs.
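
(Rough POSIX analogue of what the loader does with an image section, just to
illustrate the demand paging; mmap is the Unix-side cousin of the file
mapping the Windows loader uses:)

/* demand_map.c - map a file and touch it; pages are read from disk only
 * when first accessed, which is how EXE/DLL image sections behave too. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/bin/ls", O_RDONLY);      /* any handy file will do */
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0)
        return 1;

    /* Mapping is cheap: no I/O happens here, only page-table setup. */
    unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    /* The first access to a page triggers the actual read from disk. */
    printf("first byte: 0x%02x, size: %lld bytes\n",
           p[0], (long long)st.st_size);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}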
But now the requirement is for code that isn't dependent on underlying
processor architecture,

That requirement has been there for ages. In fact, it's one of the reasons
why high-level programming languages (like C) were created.
and we have Java and .NET. These aren't exactly the same as the old-fashioned
interpreted code either; these are decoded only once, on
the fly, and then they exist cached as machine code while they run.

Note that this is not generally true for Java VMs. The Sun VM, for example,
interprets code in the beginning and later compiles code that's used
frequently, to reduce loading times (JITing something like Swing would be
overkill).
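
(Grossly simplified sketch of that mixed mode, nothing like HotSpot's actual
machinery: run everything through an interpreter until an invocation counter
says it is hot, then dispatch to a pre-translated version. A hand-written C
function stands in for the code a real JIT would generate:)

/* mixed_mode.c - toy sketch of "interpret first, translate the hot path later". */
#include <stdio.h>

enum { OP_INC, OP_DEC, OP_HALT };

static const unsigned char bytecode[] = { OP_INC, OP_INC, OP_DEC, OP_HALT };

/* Plain interpreter: re-decodes every opcode on every run. */
static int interpret(void)
{
    int acc = 0;
    for (const unsigned char *ip = bytecode; ; ip++) {
        switch (*ip) {
        case OP_INC:  acc++; break;
        case OP_DEC:  acc--; break;
        case OP_HALT: return acc;
        }
    }
}

/* "Translated" form: the decoding work was all done ahead of time.
 * (A real JIT would emit machine code; this function stands in for it.) */
static int translated(void) { return 1; }    /* +1 +1 -1 */

int main(void)
{
    int (*entry)(void) = interpret;          /* start out interpreting */
    int result = 0;

    for (int runs = 0; runs < 10000; runs++) {
        result = entry();
        if (runs == 100)                     /* "hot" threshold crossed */
            entry = translated;              /* switch to the compiled form */
    }
    printf("result = %d\n", result);
    return 0;
}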

Niki
 

Robert Myers

That requirement has been there for ages. In fact, it's one of the reasons
why high-level programming languages (like C) were created.

Oh, us old Fortran programmers only wish. C, as it is commonly used,
is really a portable assembler. The hardware dependence is wedged in
with all kinds of incomprehensible header files and conditional
compilation. What universe do you live in that you never run into
header file weirdness that corresponds to a hardware dependency?
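
(The sort of knot I mean -- the predefined macros are the usual gcc ones, the
typedef name is made up for the example, but every "portable" C header ends
up looking something like this:)

/* portability_knot.c - hardware dependence wedged in through the preprocessor. */
#include <stdio.h>

#if defined(__x86_64__) || defined(__powerpc64__)
typedef unsigned long word_t;            /* 64-bit word on these LP64 targets */
#define WORD_BITS 64
#elif defined(__i386__) || defined(__powerpc__) || defined(__arm__)
typedef unsigned long word_t;            /* 32-bit word here */
#define WORD_BITS 32
#else
#error "port me: unknown word size"
#endif

int main(void)
{
    printf("word_t is %d bits (sizeof = %zu)\n", WORD_BITS, sizeof(word_t));
    return 0;
}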

RM
 

keith

The definition game just isn't very much fun.

Fine, but it does help to have everyone use the same terms.
Communication, and all that crap...
Emulation is one processor pretending to be another.

In hardware or software? Microcode? What about Transmeta?
Virtualization is when you pull
the "machine" interface loose from the hardware so that the machine
you are interacting with has state that is independent of the physical
hardware. That's why I don't want to call microcode virtualization.

I don't see how this definition is different from "emulation". ...or
microcode, for that matter. I see it all as a different level of
indirection.

What I call self-virtualization is interesting though (i.e. VM/370, and
such).

You may think that kind of virtualization should belong to IBM, and you
may be right, but I don't expect to see hypervisor used as anything but
a proprietary IBM marketing term.

Nope, I want to know what *your* definition is and how this differs from
interpretation, emulation, and microcode. Communication, and all...

It is true that IBM has had its hand in virtualization forever (and in
many meanings of the term), but I don't see your definitions as any
different than the other indirections I've mentioned above.
The hard part is pulling the virtual processor loose from the underlying
hardware. Once the state of your "machine" is separate from hardware,
you can examine it, manipulate it, duplicate it, keep it from being
hijacked,...all without fear of unintentionally interfering with the
operation of the machine. If you're trying to emulate one processor on
another, the virtual processor is automatically separated from the
hardware.

I don't have any issue with you here. Though I can't alter the state of
a microcoded state machine from user space either, other than through the
architected interface. Again, I don't see the big difference.

Would you consider Transmeta's processors "virtualized"? If not, why not?
The long-term fate of Mega$loth will be interesting to watch. Will they
accomplish the customer-in-leg-irons routine that IBM tried but
ultimately failed at? I'm doubting it, just like I'm doubting that x86
is forever.

I don't think anything is forever, but I do know that even S/360 is still
around and making much money. Wintel may be slain at some point, but I
don't pretend to know what will drive the nail. I don't think it's
anything we've yet seen, though I'd *love* to be proven wrong. OTOH, the
whole market may implode and I'd rather not see that, though M$ is trying
hard to piss off as many as possible.
 

keith

I don't - not to the extent of another 64 steps, anyway.

I'm only talking about 32 more. It's not like it's a hundred years. ;-)
Nope, I worked in the SC industry for 25 years. I met the man, once.
However, it was a good heuristic for a while. There are no log growth
curves that go on forever... in fact, Moore's law has just about run out.

Grins! I've been around for 1F. I've heard that Moore was done so many
times that the predictions of doom aren't interesting anymore. The extremes
to which people/industry will go to make Moore a prophet are interesting,
though. Fifteen years ago IBM invested $1B (and not in 2005 dollars) in a
synchrotron to do X-ray lithography because optical lithography was "limited"
by the wavelength of light. Oops.

Do I honestly think Moore will last another 32 generations? No, of course
not. Do I think we're done now? Don't be silly. There are still
breakthroughs to be had. 90nm is a speedbump, but I see it as pretty much
that. Is 32b dead? As far as I'm concerned, absolutely! 64b is here and
it's *cheap*. Why not?
Depends on the machine architecture ..

That's like saying, "it depends on what the meaning of 'is' is."
something sufficiently optimised
for 64 bits may well run 32 or 16 or 8 bit code slower. There is also
likely to be some interesting new cr&p headers in the binary which say
'the following is 32 bit code'.

Why? I don't see a reason for any significant information here.
If it isn't 32 bit code, then you can
assume that the instructions got longer, and everything is now 8-byte
aligned.

Bad ASSumptions.
Go look at what a 'hello world' looks like now, vs the 8086 machine
code (.com) version, and tell me it ain't larger. (I'd allow as how it
is faster!)

Define "faster" by clocks? Wall or oscillator (and which one?)
 

Niki Estner

Robert Myers said:
Oh, us old Fortran programmers only wish. C, as it is commonly used,
is really a portable assembler. The hardware dependence is wedged in
with all kinds of incomprehensible header files and conditional
compilation. What universe do you live in that you never run into
header file weirdness that corresponds to a hardware dependency?

I said the requirement was there; I didn't say it was fulfilled... The post
before sounded like this was a brand new wish, and Java/.NET were the first
ones trying to solve it. They weren't. And they didn't. Ever tried to make
an AWT applet run on multiple Java VMs?

Niki
 

Robert Myers

I said the requirement was there; I didn't say it was fulfilled... The post
before sounded like this was a brand new wish, and Java/.NET were the first
ones trying to solve it. They weren't. And they didn't. Ever tried to make
an AWT applet run on multiple Java VMs?

I don't do enough with Java to know if it is any improvement at all in
terms of portability and reusability. My take is that it isn't.

In theory, though, a virtual machine solves one class of portability
problems by presenting a consistent "hardware" interface, no matter
what the actual hardware. In practice, if Sun keeps mucking around
with the runtime environment, you hardly notice that advantage.

RM
 

Robert Myers

On Mon, 24 Jan 2005 08:17:27 -0500, Robert Myers wrote:



I don't see how this definition is different from "emulation". ...or
microcode, for that matter. I see it all as a different level of
indirection.

Microcode makes the machine state independent of the physical
hardware? Nah.

Emulation always virtualizes. Whether you want to say that a processor
that is virtualized by one means or another to multiple instances of
itself is emulating itself is a choice of language. I'd rather keep
emulation for circumstances where one processor is pretending to be
another. What about an x86 emulator running on x86? This is really
boring stuff to be spending time on.
What I call self-virtualization is interesting though (i.e. VM/370, and
such).



I don't have any issue with you here. Though I can't alter the state of
a microcoded state machine from user space either, other than through the
architected interface. Again, I don't see the big difference.

Would you consider Transmeta's processors "virtualized"? If not, why not?

The Transmeta (they are getting out of the business I hear) processor
is no more virtualized than x86 with microcode, at least from a user's
point of view. As far as I know, you can't get at any of the internal
hardware hooks. IBM's strategy (which I assume is going to be Intel's
strategy as well, maybe somebody can educate me) is to create hardware
hooks that a user with sufficient privilege can get at to facilitate
the illusion of separate processors.

I don't think anything is forever, but I do know that even S/360 is still
around and making much money. Wintel may be slain at some point, but I
don't pretend to know what will drive the nail. I don't think it's
anything we've yet seen, though I'd *love* to be proven wrong. OTOH, the
whole market may implode and I'd rather not see that, though M$ is trying
hard to piss off as many as possible.

I expect the entire programming model to change. Stream processors,
GPUs, network processors, packet processors in the place of
conventional microprocessors.

x86, S/360 forever? Of course. That huge pile of software would cost
a lot of money to recreate. It may not even be possible without
causing the world economy to collapse.

RM
 

Niki Estner

Robert Myers said:
I don't do enough with Java to know if it is any improvement at all in
terms of portability and reusability. My take is that it isn't.

In theory, though, a virtual machine solves one class of portability
problems by presenting a consistent "hardware" interface, no matter
what the actual hardware. In practice, if Sun keeps mucking around
with the runtime environment, you hardly notice that advantage.

That's exactly what high-level languages with standard libraries have tried
to do for years, too. Unfortunately, compiler/library builders don't
implement everything in the standards, which leads to headers containing
more #ifdefs than code, or to applet classes containing three different
layouts, one for each VM...

Niki
 

keith

Microcode makes the machine state independent of the physical
hardware? Nah.

That depends on your view of the "soul of the machine". To me, a hardware
type, this is indirection at least, and thus virtualization of the
hardware. To a soft-weenie, the ISA (indeed the language) is king, so
perhaps you have a different opinion. ;-)
Emulation always virtualizes. Whether you want to say that a processor
that is virtualized by one means or another to multiple instances of
itself is emulating itself is a choice of language.

Well, that's where we are. The semantic lines blur quickly as technology
progresses.
I'd rather keep
emulation for circumstances where one processor is pretending to be
another. What about an x86 emulator running on x86? This is really
boring stuff to be spending time on.

Ok, forget emulation (though again there isn't a thick black line, IMO).
What about virtualization, which was the point, IIRC.

The Transmeta (they are getting out of the business I hear)

Irrelevant. We wuz talking technology, not business (I wouldn't have given
you a nickel for their business when they were an enigma).
processor is
no more virtualized than x86 with microcode, at least from a user's
point of view.

I'm not a user. I'm a hardware weenie. ;-)
As far as I know, you can't get at any of the internal hardware hooks.

The user cannot, no. That doesn't change the microarchitecture.
IBM's strategy (which I assume is going to be Intel's
strategy as well, maybe somebody can educate me) is to create hardware
hooks that a user with sufficient privilege can get at to facilitate the
illusion of separate processors.

Sure, to different degrees. VM/360 (et al.) virtualized the entire
system such that you could re-virtualize it again under VM/360 (at least
it worked on the 370s). The PowerPCs add a layer of indirection
to "protected mode" by having a hypervisor state that lives above.
This is obviously a lesser "virtualization".
I expect the entire programming model to change. Stream processors,
GPUs, network processors, packet processors in the place of
conventional microprocessors.

You love them "stream processors"! You really ought to get into
programming DSPs, though that would take you into the world of real
problems. ;-)
x86, S/360 forever? Of course. That huge pile of software would cost
a lot of money to recreate. It may not even be possible without causing
the world economy to collapse.

Exactly the point I've been making here for *years*. I learned this
lesson 30 years ago with FS. I wonder if Intel has learned this
lesson yet! Good ideas are quite often found to be not so wonderful.
 

Peter Wone

BTW) as a virtualizer, now it's your turn to tell me why you think
"emulation" is "virtualization". ;-)

A VM is an emulator for something that never actually existed.

Personally I think that VMs are a great idea because
(a) you can manufacture them on demand
(b) process isolation
(c) after a decade of settling, the mature VM can be realised with hardware.
(The antonym for virtualise would be realise, I should think.)

It occurs to me that it shouldn't be too much of a challenge to build
hardware supporting VMs the way CPUs currently support processes. I'd be
surprised if such solutions aren't already IBM patents.
 

keith

A VM is an emulator for something that never actually existed.

Define "already existed". VM/360 virtualized the S/360 ISA, using the
S/360 ISA. The S/360 certainly existed before VM/360.

Power4/5 has a hypervisor layer that "virtualizes" the PPC protected mode
environment, which certainly existed before the hypervisor was added to
the architecture.

Personally I think that VMs are a great idea because

It would help to know what you mean here.


(a) you can manufacture them on demand

You can manufacture "threads" on demand too. Some virtualization is
limited in the "on demand" arena too. PR/SM only allowed seven instances
on the 390 (15 later) images (whouch could be VM, with any number of OS
images under that).

(b) process isolation

NO question, done right. Not that getting a security rating is a small
feat here. Interprocess signaling is the big bugaboo.

(c) after a decade of settling, the mature VM can be realised with hardware.
(The antonym for virtualise would be realise, I should think.)

The fact is that the process is the opposite. Mature hardware is
virtualized to add functionality.
It occurs to me that it shouldn't be too much of a challenge to build
hardware supporting VMs the way CPUs currently support processes. I'd be
surprised if such solutions aren't already IBM patents.

Given that VM/360 (CP67) is at *least* 35 years old, I'd guess you're on
the right track. There has been a *ton* of work in this area over the
years.
 

Jerry Peters

In comp.sys.ibm.pc.hardware.chips Peter Wone said:
A VM is an emulator for something that never actually existed.

Personally I think that VMs are a great idea because
(a) you can manufacture them on demand
(b) process isolation
(c) after a decade of settling, the mature VM can be realised with hardware.
(The antonym for virtualise would be realise, I should think.)

It occurs to me that it shouldn't be too much of a challenge to build
hardware supporting VMs the way CPUs currently support processes. I'd be
surprised if such solutions aren't already IBM patents.
AFAIK all of the modern IBM mainframes support LPAR (logical
partition) mode. This allows you to run multiple machine images
on one physical machine. LPARs have been available on IBM's
mainframes since sometime in the '80s, IIRC.

Jerry
 
