Is Itanium the first 64-bit casualty?

Bob Niland

I'm sure that PCI-E also fixes the voltage problem
TAS: > It's not that they're uneconomical.
> They simply don't exist.

I was going to write that at one time, and discovered
that someone had done it. I no longer have a reference.
> The PCI standard does not define universal slots.

Agreed, and it also doesn't define hot-swap slots,
but that hasn't prevented people from implementing
them. If the architecture is slot-per-bus, with
local power management support logic, the connector
can include a sensor for the voltage keyways of the
card, and configure itself. Clumsy. Expensive.
But not impossible.
> The intended migration path was ... hosts could switch
> to 3.3V signaling. There was never any
> intent of offering dual voltage host slots.
> ...
> What has actually happened, ...

I worked for a computer maker that went all-3.3v
"prematurely" and had to add 5V slots back in later.

There's some dissonant stuff out there yet, like
SCSI-160 cards on 133MB/s 5V/32-bit PCI. And we're seeing
it repeated with SATA-150 PCI cards that still get only
133MB/s at the backplane.
> ... that consumer boards never even took advantage
> of the real no-brainer for PCI performance improvement,
> 64-bit slots.

Tell me about it. I sold my last PCI-4x SCSI cards on
eBay last year, as it became apparent that no consumer
motherboards were ever likely to get 64b or 66MHz PCI
slots.

Like I said, PCI-Express fixes this, but it might not
have been necessary for a few years yet, if Parallel PCI
had had a more compelling migration plan. USB and FireWire
evidently learned from the PCI experience, and avoided
the trap. Presumably PCI-E can eventually grow link speed
as well.
 
daytripper

TAS: > It's not that they're uneconomical.

> I was going to write that at one time, and discovered
> that someone had done it. I no longer have a reference.

> Agreed, and it also doesn't define hot-swap slots,

?
The PCI Hot-Plug Specification was written and published by the PCI SIG and
bears the PCI LocalBus logos and banner...
 
assaarpa

I'm aware of that, but I think it's *highly* unlikely in the case of
this particular guy. He had successfully created > 2GB files (and
verified them). I think it's far more likely that the software was
FUBARed or NT was simply preventing > 2GB mmap.

If "the software" uses ostream or FILE* handles, offsets are of "long"
type, so indeed they can seek only up to +2GB. CreateFile() and
_open() support 64-bit (signed) offsets and seeks; the "trouble" with
using those is:

- unorthodox / non-standard solution, multi-platform software should roll
own filesystem wrapper to deal with the issue gracefully

- a lot of code (naively and incorrectly) uses the "int" type as an offset;
this would of course break a lot of code that stores the offset, file size,
etc. in an "int" where it is a 32-bit signed integer (very common on IA32
Windows, for example :)

So even though NTFS doesn't have trouble with > 2 GB files, the mainstream
file I/O APIs on Windows do; you have to use Windows proprietary functions.
If the application doesn't seek or tellp, then > 2 GB files are not a problem
with the mainstream APIs, but when editing I doubt that is often the case.
What software was this? Maybe it's time to look for upgrades? ;-)
 
Tom Payne

In comp.sys.ibm.pc.hardware.chips Scott Moore said:
> I suggest that you read my postings before responding. It is very
> clear that you do not understand the issues. I suggest that you
> read up about capability machines before continuing.
>
> You even missed my point about the read-only and no-execute bits,
> which are in common use today. Modern address spaces ARE segmented,
> but only slightly.
> [...]
> I have heard the arguments over and over and over (and over) again.
>
> Obviously you didn't live through the bad old days of segmentation,
> or you would not be advocating it.

I've seen any number of memory architectures that admit the adjective
"segmented". I'd really like to know which one we are talking about. Are
we talking segmentation in the sense of:
- Burroughs
- IBM
- Intel
- Multics
- Xerox Sigma Seven
- Intel 432
- etc.?
Segmentation on each of these had subtle and not so subtle differences
in meanings. (And Nick likely has something yet different in mind ;-)

The inevitable fundamental questions are:
- How is swapping handled?
- How is memory protection handled?
- How are linking and sharing handled?
The flat space paradigm is one way to answer all of the above, but
there are interesting alternatives.

Tom Payne
 
