Intel's agreement with the FTC

Robert Myers

It hasn't said that you need to keep the slots around, just the bus.
That means GPUs can be soldered onto motherboards using PCIe lanes
directly.

I *knew* you'd say that. Let's see what happens.

Robert.
 
Robert Myers

What, is this supposed to be a form of derision from you? News is always
about yesterday's news.

Even optical interconnects are yesterday's news. Why not just wait for
quantum interconnects?

But you were just telling me that optical interconnects wouldn't
happen for ten years. How could that be yesterday's news?

Let's put it this way. AMD and nVidia have just built the Maginot
Line of computer technology, and you are offering tours.

Robert.
 
Yousuf Khan

I *knew* you'd say that. Let's see what happens.

Robert.

That's the way discrete graphics in laptops are done anyways. Have you
ever seen a video card for laptops, either from ATI or Nvidia? The
mobile video "cards" are really just part of the motherboard. Plus Atom
systems will still need PCIe lanes, because all modern PC-Card (formerly
PCMCIA) peripherals are direct extensions of the PCIe interfaces.

Yousuf Khan
 
Yousuf Khan

But you were just telling me that optical interconnects wouldn't
happen for ten years. How could that be yesterday's news?

At some point everything is yesterday's news compared to some other news.

Let's put it this way. AMD and nVidia have just built the Maginot
Line of computer technology, and you are offering tours.

Speaking of yesterday's news.

Yousuf Khan
 
Robert Myers

At some point everything is yesterday's news compared to some other news.

Speaking of yesterday's news.

And yesterday's wars. The Maginot Line was ineffective because it
prepared for a war that was already over.

Robert.
 
Yousuf Khan

And yesterday's wars. The Maginot Line was ineffective because it
prepared for a war that was already over.

I'll agree with part of that historical sentiment. The PCIe ruling was
mainly a sop to Nvidia because Intel was crippling the performance of
Nvidia GPUs within its latest PCIe chipsets. That's basically just a
little skirmish in a long, drawn-out, multi-front war, a battle that
might have already finished, for all we know. The WW2-era French
Maginot Line was a lesson learned from a previous major war, but that
lesson made France complacent about its defenses. This thing does the
opposite: it takes a lesson from a previous minor skirmish and
completely surrounds and shackles Intel. In other words, it's the
reverse of the Maginot Line; it's an over-reaction against Intel. As
you said, Intel is now obligated to keep carrying PCIe for several
more years (which it probably would've done anyways), but now it must
clear its changes with its rivals (which it never would've done).

Yousuf Khan

***

"Section V. is one of the most interesting, it puts some serious
handcuffs on Intel. All while forcing them to dig a hole deep enough for
light not to reach the bottom. And sit there. Smiling. What V. says is
that any time Intel makes a change, basically any change, that degrades
the performance of another competitor, Intel has to prove that it was
done for technically beneficial reasons.

Remember the part about PCIe changes that allegedly hamstrung Nvidia
GPUs? Well, if that happens again, the burden of proof is now on Intel
to show why they did it. Mother hen is getting jittery from all that Red
Bull, and is looking for someone to hit. Hard. Intel has to climb out of
the hole, feed the hen Valium, and then dance. Fast. And look pretty
while doing it, or WHAM."
http://www.semiaccurate.com/2010/08/06/more-intel-dirt-cleaned-ftc/
 
Robert Myers

I'll agree with part of that historical sentiment. The PCIe ruling was
mainly a sop to Nvidia because Intel was crippling the performance of
Nvidia GPUs within its latest PCIe chipsets. That's basically just a
little skirmish in a long, drawn-out, multi-front war, a battle that
might have already finished, for all we know. The WW2-era French
Maginot Line was a lesson learned from a previous major war, but that
lesson made France complacent about its defenses. This thing does the
opposite: it takes a lesson from a previous minor skirmish and
completely surrounds and shackles Intel. In other words, it's the
reverse of the Maginot Line; it's an over-reaction against Intel. As
you said, Intel is now obligated to keep carrying PCIe for several
more years (which it probably would've done anyways), but now it must
clear its changes with its rivals (which it never would've done).

        Yousuf Khan

***

"Section V. is one of the most interesting, it puts some serious
handcuffs on Intel. All while forcing them to dig a hole deep enough for
light not to reach the bottom. And sit there. Smiling. What V. says is
that any time Intel makes a change, basically any change, that degrades
the performance of another competitor, Intel has to prove that it was
done for technically beneficial reasons.

Remember the part about PCIe changes that allegedly hamstrung Nvidia
GPUs? Well, if that happens again, the burden of proof is now on Intel
to show why they did it. Mother hen is getting jittery from all that Red
Bull, and is looking for someone to hit. Hard. Intel has to climb out of
the hole, feed the hen Valium, and then dance. Fast. And look pretty
while doing it, or WHAM."
http://www.semiaccurate.com/2010/08/06/more-intel-dirt-cleaned-ftc/

The irony of all of this, Yousuf, is that you wouldn't even have this
playground if it weren't for the aggressive behavior of two upstart
monopolists: Microsoft and Intel. IBM, the once-invincible
monopolist, never saw it coming. IBM survived, but it almost didn't.

If it can happen once, it can and almost certainly will happen again.
Maybe the mass market for uber expensive PC's will dry up, and the
future is ARM and Ubuntu. Maybe the server space and even HPC will
become dominated by specialized CPU's that only do some jobs
exceedingly well and others not at all. Right now, the business is
sufficiently capital and research intensive that it favors
monopolists, but the technology is maturing and on its way to being
commoditized.

Anyone who has observed all this from beginning to end, watching
companies come and go like fireflies flickering in the night, has to
realize that everything is temporary. The interesting question for
someone with such a perspective isn't what fleas like the FTC will do
next, but from which bush the next pit bull will leap out. "I always
say," Caligula opines in I, Claudius, "find a dog who'll eat a bigger
dog."

The bigger dog will come, even if no one knows from where or when. In
the meantime, the sob stories of also-rans just aren't that
interesting.

Robert.
 
Joe Pfeiffer

Yousuf Khan said:
That's the way discrete graphics in laptops are done anyways. Have you
ever seen a video card for laptops, either from ATI or Nvidia? The
mobile video "cards" are really just part of the motherboard. Plus
Atom systems will still need PCIe lanes, because all modern PC-Card
(formerly PCMCIA) peripherals are direct extensions of the PCIe
interfaces.

I thought PC Card was PCI, ExpressCard (which I've never actually seen
in real life) was PCIe?
 
Intel Guy

Joe said:
I thought PC Card was PCI, ExpressCard (which I've never actually
seen in real life) was PCIe?

If you've handled a video card made during the past 3 or 4 years, you've
handled a PCIe card.

http://en.wikipedia.org/wiki/PCIe

Not to be confused with PCI-X

http://en.wikipedia.org/wiki/PCI-X

ExpressCard is a replacement for the PCMCIA or CardBus format:

http://en.wikipedia.org/wiki/Express_card

One of the needs that fostered the development of PCI-X seemed to be
gigabit LAN cards. But there are plenty of conventional PCI gigabit
LAN cards these days, so why was PCI-X needed for that?
 
Joe Pfeiffer

Intel Guy said:
If you've handled a video card made during the past 3 or 4 years, you've
handled a PCIe card.

It's ExpressCard I don't think I've ever seen in real life.
 
daytripper

One of the needs that fostered the development of PCI-X seemed to be
gigabit LAN cards. But there are plenty of conventional PCI gigabit
LAN cards these days, so why was PCI-X needed for that?

It wasn't - and isn't - for a single channel. But that's just one perspective.

PCI-X was developed primarily for servers - which is why you never saw much of
it in the desktop/deskside space. The evolution of PCI-X - even if just
considering Mode 1 - not only upped the bandwidth ante, it also allowed for
multiple devices - like quad enet devices - in a single slot, with multiple
cards per bus, without totally starving all of them for throughput.

In the same vein, PCI-X made multi-function cards practical (e.g. SCSI HA plus
a couple of enet HAs) as total I/O solutions for thinner, slot-bound server
models - like pizza boxen - a paradigm that wouldn't be very productive on
PCI...

Cheers

/daytripper
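
To put rough numbers on that throughput argument, here is a minimal
back-of-the-envelope sketch in Python. The figures are the standard
theoretical peak rates for the buses named, not measurements from this
thread; treat the output as ballpark only.

# Why quad-port gigabit cards were practical on PCI-X but starved on
# plain PCI. All figures are theoretical peak rates in MB/s.

PCI_MBS = 133                   # conventional PCI: 32-bit @ 33 MHz, shared bus
PCIX_133_MBS = (64 // 8) * 133  # PCI-X Mode 1: 64-bit @ 133 MHz = 1064 MB/s

GBE_PORT_MBS = 125              # one gigabit ethernet port, one direction
quad_enet_mbs = 4 * GBE_PORT_MBS

print(f"quad enet device needs ~{quad_enet_mbs} MB/s one-way")
print(f"conventional PCI peak:  {PCI_MBS} MB/s (shared by all slots)")
print(f"PCI-X 133 peak:         {PCIX_133_MBS} MB/s")

On plain PCI, a single quad-port card would want several times the
entire shared bus; on PCI-X 133 it uses roughly half the slot's peak,
leaving headroom for other cards on the bus.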
 
Torbjorn Lindgren

Intel Guy said:
One of the needs that fostered the development of PCI-X seemed to be
gigabit LAN cards. But there are plenty of conventional PCI gigabit
LAN cards these days, so why was PCI-X needed for that?

PCI is 133 MB/s *theoretical*, but in practice it's more like 90-100
MB/s on the BEST chipsets on expensive servers, and it was usually
shared between at least several slots... On desktops it was more
likely shared between all slots, and the PCI bus didn't go above
60-80 MB/s.

A single gigabit ethernet maxes out at about 240 MB/s for full-duplex
(125+125 MB/s, minus overhead), so you have a big bandwidth shortfall
(60-90 MB/s << 240+) even with a gigabit network card on a dedicated
PCI bus.

It's worth noting that this was actually noticeable enough that before
PCI-e came out, many onboard gigabit network cards used a local bus to
avoid having to run over PCI... Likewise, the built-in P-ATA/S-ATA
controller was directly on the Southbridge chip and thus also had
faster connectivity.

These weren't servers or high-end workstations I'm talking about; this
was run-of-the-mill consumer desktops (all of them did it, because all
chipsets had versions of this).

Nowadays either the CPU or the Northbridge provides a significant
number of PCI-e "lanes", which are then handed out as needed. Even a
single PCI-e 1.0 lane is much faster than gigabit ethernet, but USB
3.0 or SATA 3.0 may need more than that...

There are many other sources of data that easily overwhelm a PCI bus;
single physical disks can do it, never mind a bunch of them on a RAID
controller, or SSDs.

As an example, a single 4-port SATA 3.0 controller would need 2400
MB/s of bandwidth (worst case, all in one direction) to guarantee it
doesn't bottleneck anything prematurely. That corresponds to 4.8
PCI-e 2.0 lanes; in practice 4 lanes is probably enough, and I could
see 2 being used in low-end configurations.

Nowadays if you have PCI slots they're likely bridged from PCI-e, so
it's both faster than old-style desktop PCI and not shared between
slots.
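
A quick sketch of the arithmetic in the post above, using only the
nominal rates it quotes (125 MB/s per GbE direction, 600 MB/s per
SATA 3.0 port, 500 MB/s per PCI-e 2.0 lane per direction); the 60-100
MB/s PCI range is the observed throughput mentioned, not a spec value.

# Bandwidth arithmetic from the post above, nominal figures only.

pci_practical_mbs = (60, 100)    # observed PCI throughput: desktop..best server

gbe_full_duplex_mbs = 125 + 125  # ~250 MB/s raw; ~240 MB/s after overhead
print(f"GbE full duplex needs ~{gbe_full_duplex_mbs} MB/s; PCI delivers "
      f"{pci_practical_mbs[0]}-{pci_practical_mbs[1]} MB/s in practice")

# 4-port SATA 3.0 controller, worst case with all traffic one direction:
needed_mbs = 4 * 600             # 2400 MB/s

pcie2_lane_mbs = 500             # PCI-e 2.0: 5 GT/s, 8b/10b -> 500 MB/s/lane
print(f"{needed_mbs} MB/s / {pcie2_lane_mbs} MB/s per lane = "
      f"{needed_mbs / pcie2_lane_mbs} PCI-e 2.0 lanes")  # 4.8 lanes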
 
Rick Jones

In comp.sys.intel Intel Guy said:
One of the needs that fostered the development of PCI-X seemed to be
gigabit LAN cards. But there are plenty of conventional PCI gigabit
LAN cards these days, so why was PCI-X needed for that?

Conventional PCI was insufficient for more than GbE. Dual-port just
fit (handwaving), but once FC went 2Gbit, then 4, and once 10GbE
appeared, PCI didn't have the bandwidth. PCI-X 133 was good to about
7 Gbit/s, so more or less OK for a first-generation 10GbE interface.
PCI-X 266 could give you link-rate in one direction but would not give
you link-rate in both directions, nor satisfy dual-port 10GbE (or
8Gbit FC).

I've probably had a few handwaving math errors, but it should give a
flavor.

rick jones
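
That handwaving math, spelled out in a short Python sketch. The 64-bit
bus width and the roughly 85%-of-peak efficiency factor are my
assumptions, chosen to reproduce the ~7 Gbit/s figure in the post.

# Handwaving check of the PCI-X vs 10GbE link rates, in Gbit/s.
# Assumptions (mine): 64-bit bus width, ~85% of peak achievable.

def pcix_peak_gbit(clock_mhz, width_bits=64):
    """Theoretical peak of a PCI-X bus in Gbit/s."""
    return clock_mhz * width_bits / 1000.0

pcix133 = pcix_peak_gbit(133)   # ~8.5 Gbit/s peak
pcix266 = pcix_peak_gbit(266)   # ~17 Gbit/s peak

print(f"PCI-X 133: {pcix133:.1f} Gbit/s peak, ~{0.85 * pcix133:.0f} practical")
print(f"PCI-X 266: {pcix266:.1f} Gbit/s peak")

# 10GbE wants 10 Gbit/s per direction: PCI-X 266 covers one direction,
# but not both directions (20 Gbit/s total), nor dual-port 10GbE.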
 
Bill Davidsen

Robert said:
One of the ironies here is that if Intel *did* keep prices
"artificially high," it would have benefited AMD, who has a hard time
selling chips at a profit.

If Intel were to sell chips at a lower profit for just a few years, I
think AMD would vanish.

As to good news for me, I don't see any. A regulatory tax on Intel's
business. More obstacles to innovation. Holding on to PCI-X is *not*
good news.

Having 5-6 kinds of slots in common use isn't great sense, either.
 
Bill Davidsen

Yousuf said:
Actually, as I remember it, PCI-e was foisted on the consumers to avoid
them adopting AMD's Hypertransport as a standard. When AMD developed HT,
Intel had no answer to it for nearly 8 years. So it threw the
red herring of a next-generation, serial PCI in as the answer. AMD
didn't object, as it wasn't really a competitor to HT, and AMD itself
could use it. Video cards that could connect directly through HT
would've actually been much faster than PCI-e or AGP, since there would
be much smaller overhead, but it would've been proprietary to only AMD
systems, as Intel would've never adopted it, even if it was free.

Intel knows which way the wind blows, look at x86_64 vs. Itanium.

No, that bothers me, because no one else is forced to use it when the
next thing comes along.
 
Bob Willard

Bill said:
If Intel were to sell chips at a lower profit for just a few years I
think AMD would vanish.

And, if AMD vanished, the EU and the US DoJ would attack Intel as a
monopoly. Intel needs AMD alive, but preferably on life-support.
 
Robert Myers

And, if AMD vanished, the EU and the US DoJ would attack Intel as a
monopoly.  Intel needs AMD alive, but preferably on life-support.

I think Intel expected its ultimate competitor to be IBM.

Intel had the financial wherewithal to starve AMD out of existence,
but, as you point out, then it *would* have had serious problems.

The strategy was to move the battle from Intel x86 vs. AMD x86 to
Itanium vs. Power. It didn't work out that way, of course, but, in
that scenario, AMD would have been dispensable.

In any case, the idea that keeping prices "artificially high" harmed
competition is too laughable to repeat. We still have to endure all
this brouhaha, no matter how ridiculous it is at its foundation.

Robert.
 
Robert Redelmeier

In comp.sys.ibm.pc.hardware.chips Robert Myers said:
In any case, the idea that keeping prices "artificially high" harmed
competition is too laughable to repeat. We still have to endure all
this brouhaha, no matter how ridiculous it is at its foundation.

Agreed, someone is horribly confused -- high prices harm consumers
(DRAM redux?); _low_ [predatory] pricing harms competition.

I don't see much Intel price abuse, nor how any could be proven.
Their exclusivity deals [Dell] are a clear violation. Whether
US Antitrust laws should be so nasty is a separate question.

-- Robert R
 
Rick Jones

In comp.sys.intel Robert Myers said:
I think Intel expected its ultimate competitor to be IBM.
Intel had the financial wherewithal to starve AMD out of existence,
but, as you point out, then it *would* have had serious problems.
The strategy was to move the battle from Intel x86 vs. AMD x86 to
Itanium vs. Power. It didn't work out that way, of course, but, in
that scenario, AMD would have been dispensable.

Yet, IBM has not gone away. So, Intel is indeed up against both AMD
x86 and IBM Power.

rick jones
 
