Nvidia SLI: SLI's back with a vengeance


First of One

This is actually a very good technique. Dividing the screen into two
load-balanced halves means there's no redundant texture memory usage as with
3dfx's scanline interleaving, and no mouse lag as with ATi's alternate-frame
rendering. Wicked3D had something similar a couple of years ago, but the
immature drivers back then produced a black line between the two image
halves.
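
Roughly, the difference between the three schemes comes down to which GPU owns which scanlines or frames. Here's a minimal sketch in C++ (my own illustration, not anything from nVidia's drivers; the Scheme, gpuFor and splitLine names are made up, and splitLine just stands in for whatever boundary the driver's load balancer picks):

```cpp
// Rough sketch of how work could be divided between two GPUs under the
// three multi-GPU schemes mentioned above. Illustrative only.
#include <cstdio>

enum class Scheme { SplitFrame, ScanlineInterleave, AlternateFrame };

// Returns which GPU (0 or 1) is responsible for a given scanline of a
// given frame. splitLine is the load-balanced boundary used only by
// split-frame rendering.
int gpuFor(Scheme s, int scanline, int frame, int splitLine) {
    switch (s) {
        case Scheme::SplitFrame:         return scanline < splitLine ? 0 : 1; // top half / bottom half
        case Scheme::ScanlineInterleave: return scanline & 1;                 // even/odd lines (3dfx style)
        case Scheme::AlternateFrame:     return frame & 1;                    // whole frames alternate (ATi style)
    }
    return 0;
}

int main() {
    const int height = 768, frame = 0, splitLine = 420; // splitLine picked by the driver's balancer
    int rows[2] = {0, 0};
    for (int y = 0; y < height; ++y) rows[gpuFor(Scheme::SplitFrame, y, frame, splitLine)]++;
    std::printf("split-frame: GPU0 renders %d rows, GPU1 renders %d rows\n", rows[0], rows[1]);
}
```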

The real card to watch for is the 6800GT, which may actually be affordable
in SLI config.
 

John Lewis

First of One said:
This is actually a very good technique. Dividing the screen into two
load-balanced halves means there's no redundant texture memory usage as with
3dfx's scanline interleaving, and no mouse lag as with ATi's alternate-frame
rendering. Wicked3D had something similar a couple of years ago, but the
immature drivers back then produced a black line between the two image
halves.

The real card to watch for is the 6800GT, which may actually be affordable
in SLI config.

Agreed!!

Once you have a PCI-Express chipset that will support 2 or more
PCI-Express sockets, that is. The nForce4 chipset, still in design at nVidia,
is very likely to do just that, and it may for the first time make me
think very seriously about leaving the Intel camp. An Athlon 64 FX-53 in the
939-pin package, unlocked for overclocking, plus nForce4, plus dual 6800GT
PCI-Express cards in SLI configuration; the thought makes me really drool
(and my pocketbook wilt).

As far as the enthusiast community goes, Intel has really lost their
way in the past year. Besides the power-hungry Prescott, the latest
Intel misstep is to DELIBERATELY build a 10% overclock limit
into the 915/925 chipsets. Intel has again become arrogant - they
periodically do that until the threat of real competition beats them
over the head.

John Lewis
 

Tim

SLIisBACK said:

Smells like desperation to me. Seems like they can't keep up with ATI's
technology, so they're going for the brute force approach. Ironically,
NVidia criticized 3dfx for the same thing back in the late nineties.
 

CapFusion

Smells like desperation to me. Seems like they can't keep up with ATI's
technology, so they're going for the brute force approach. Ironically,
NVidia criticized 3dfx for the same thing back in the late nineties.

Desperation or not, I do not see anything wrong with this. If they have this
leverage, why not use it? ATi will find some technology or else try to trail
as closely as it can until they come up with something. This is the cruel and
brutal side of business. Technology will advance as each rival finds new ways
to be better than the other.

CapFusion,...
 

J. Clarke

CapFusion said:
Desperation or not, I do not see anything wrong with this. If they have
this leverage, why not use it? ATi will find some technology or else try
to trail as closely as it can until they come up with something. This is
the cruel and brutal side of business. Technology will advance as each
rival finds new ways to be better than the other.

Or the market will look at it and snore.
 

Redbrick

Curious...I wonder why they decided to split the rendering horizontally
as opposed to vertically...seems if they split the rendering vertically
they wouldn't have to bother w/ a separate algorithm to balance the rendering
load in real time. If I understand the process correctly the rendering is
not split 50/50 but based on the rendering load of a scene/screen...
Furthermore...would this balancing act suck up GPU processing power??

....that could be used to render the scene perhaps???

Just seems to make more sense...perhaps someone can shed some light on
my ignorance here???

Thanks

Redbrick...who Loves his CLK
 

John Lewis

Smells like desperation to me. Seems like they can't keep up with ATI's
technology

In what way? Please explain?

I thought that it was nVidia that had overcome the significant
intricacies of a DX9.0c implementation, but maybe I am
reading the wrong technical literature.

so they're going for the brute force approach.

Not quite. What nVidia is doing is a simple microcosm, for
desktop computers and graphics applications, of the shared
processing approach used worldwide by number-crunching
supercomputers. nVidia has had the foresight to implement
the sharing mechanism in their current silicon. Not exactly
a new concept. In a similar domain a few years ago, I was
involved in the design of chips for time-simultaneous
processing of the 3 channels of component video (Y, Cr, Cb),
each with a link port for accurate synchronization and to
coordinate task sharing with the other two chips.

John Lewis
Ironically,
NVidia criticized 3dfx for the same thing back in the late nineties.

Yes, solely for marketing reasons, never technical.

John Lewis
 

First of One

Unlike the 3dfx VSA-100, nowadays a single 6800 Ultra is competitive with
the X800XT, so this SLI thing is really just a matter of image and bragging
rights. Seriously, 0.5 GB video RAM, 32 textures in a single cycle, four
expansion slots...

The most thumpingly expensive setup is Quadro SLI. Total cost is at least
$5000.
 

John Lewis

First of One said:
Unlike the 3dfx VSA-100, nowadays a single 6800 Ultra is competitive with
the X800XT, so this SLI thing is really just a matter of image and bragging
rights. Seriously, 0.5 GB video RAM, 32 textures in a single cycle, four
expansion slots...

The most thumpingly expensive setup is Quadro SLI. Total cost is at least
$5000.

Yep.

Pros pay $1000 where consumers pay $100 for almost the same
thing nowadays in the technology markets.

I do freelance video work and ensure maximum quality for
my capital-equipment buck by very judiciously mixing pro
and "high-end domestic" tools and hardware.

John Lewis
 

assaarpa

Redbrick said:
Curious...I wonder why they decided to split the rendering horizontally
as opposed to vertically...seems if they split the rendering vertically
they wouldn't have to bother w/ a separate algorithm to balance the rendering
load in real time.

The only difference from a physical point of view is in what order the
framebuffer is stored in memory, and it seems less hassle to keep the memory
contiguous for a rectangular block of the display rather than splitting the
memory as well as the display area. This way each GPU could keep its own
separate framebuffer for the upper or lower half; thinking about it quickly,
that would be much less hassle.
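
Quickly sketched (my own illustration of the addressing, not the actual hardware layout; the W, H, BPP and offset names are made up): with a row-major framebuffer, a top/bottom split gives each GPU one contiguous range of addresses, while a left/right split would scatter each GPU's pixels across every row:

```cpp
#include <cstdio>

// Byte offset of pixel (x, y) in a row-major framebuffer, 4 bytes per pixel.
constexpr int W = 1024, H = 768, BPP = 4;
int offset(int x, int y) { return (y * W + x) * BPP; }

int main() {
    // Top/bottom split at y = 384: GPU0 owns offsets [0, 384*W*4), GPU1 the rest,
    // i.e. each GPU's half is one contiguous block of memory.
    std::printf("horizontal split: GPU0 = [%d, %d), GPU1 = [%d, %d)\n",
                offset(0, 0), offset(0, 384), offset(0, 384), W * H * BPP);

    // Left/right split at x = 512: GPU1's region starts over again on every row,
    // so its pixels are interleaved with GPU0's throughout the buffer.
    std::printf("vertical split, row 0: GPU1 starts at %d; row 1: GPU1 starts at %d\n",
                offset(512, 0), offset(512, 1));
}
```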

If the display is rotated 90 or 270 degrees, I wonder which way the split is
done... does the split rotate with the display or not? My educated guess is
that it does. :)

If I understand the process correctly the rendering is
not split 50/50 but based on the rendering load of a scene/screen...

Possibly, but doing it efficiently requires some thinking. If we adaptively
move the scanline we split at, it means we must either:

- dynamically reserve more memory for scanlines added to the current half-screen, or
- have the memory preallocated

If preallocated, which option?

- some fixed threshold such as 2/3 of the screen is allocated for each screen half,
totaling 33% memory waste, or
- allocate a full buffer for both screen halves, totaling 100% memory waste

If the memory were split 50/50 the memory waste would be 0% and there would be
no need for dynamic allocation (which I doubt is done). My (again educated)
guess is that they either split 50/50 (no dynamic balancing) or allocate
100/100 (dynamic balancing). Dynamic balancing has a slight problem: when one
GPU reclaims scanlines from the other, the current buffer contents must be
copied to the other GPU's framebuffer so that the buffers' contents remain in sync.
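
To put rough numbers on those options (my own back-of-the-envelope arithmetic, using a hypothetical 1024x768, 32-bit framebuffer; the variable names are just for illustration):

```cpp
#include <cstdio>

int main() {
    const double full = 1024.0 * 768.0 * 4.0;          // one full 32-bit framebuffer, in bytes

    const double split5050    = 2 * (full / 2);        // exact halves: no slack for rebalancing
    const double split23each  = 2 * (full * 2.0 / 3);  // 2/3 of the screen reserved per GPU
    const double split100each = 2 * full;              // a full-height buffer per GPU

    std::printf("50/50 halves:  %.0f bytes, %3.0f%% overhead\n", split5050,    100 * (split5050 / full - 1));
    std::printf("2/3 per half:  %.0f bytes, %3.0f%% overhead\n", split23each,  100 * (split23each / full - 1));
    std::printf("full per half: %.0f bytes, %3.0f%% overhead\n", split100each, 100 * (split100each / full - 1));
}
```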

The decision is probably better made before the frame is rendered. One option
would be to keep a copy of all rendering commands for the whole frame, which I
doubt they do: it requires memory and introduces one frame of latency. The most
viable way to do this is to look at the previous frame, track the amount of
work done by each GPU, and decide the splitting based on that information.
Simple, and it would work pretty nicely; that's what I'd do.

All things considered: dynamic balancing is probably not done intra-frame.
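
A minimal sketch of that previous-frame approach (entirely my own guess at how a driver might do it, not nVidia's actual algorithm; rebalanceSplit and its parameters are hypothetical names):

```cpp
#include <algorithm>

// Sketch of per-frame load balancing for a two-GPU split-frame setup:
// use the time each GPU spent on the previous frame to pick the split
// scanline for the next one. GPU0 renders rows [0, splitLine), GPU1 the rest.
int rebalanceSplit(int splitLine, int screenHeight,
                   double gpu0FrameMs, double gpu1FrameMs) {
    if (splitLine <= 0 || splitLine >= screenHeight) return screenHeight / 2;

    // Per-scanline cost observed on the previous frame for each half.
    double cost0 = gpu0FrameMs / splitLine;
    double cost1 = gpu1FrameMs / (screenHeight - splitLine);
    if (cost0 + cost1 <= 0.0) return splitLine;

    // Split where both halves would have taken equal time:
    // target * cost0 == (screenHeight - target) * cost1
    int target = static_cast<int>(screenHeight * cost1 / (cost0 + cost1) + 0.5);

    // Move only part of the way there so the split doesn't oscillate,
    // and always leave a small strip for each GPU.
    int next = splitLine + (target - splitLine) / 4;
    return std::clamp(next, 16, screenHeight - 16);
}
```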

Furthermore...would this balancing act suck up GPU processing power??

No, because the parts of the chip doing the work wouldn't know. It would eat
transistors to implement the balancing, which means a (not necessarily
significantly) larger chip. Doing this on the CPU would be infeasible, as it
would require write-back from the vertex programs to know the coverage of each
screen half (since this is done in the GPU, the load balancing doesn't leave
the CPU much work to do). I'm assuming that load balancing is done at all, of
course. :)

...that could be used to render the scene perhaps???

No, because GPUs are not CPUs, which execute general-purpose programs. In a
GPU things are implemented in functional blocks, which means transistors are
used to implement fixed functionality for a lot of things that are done:
sampling, scan conversion, clipping and so on. The fragment and
vertex programs are an exception to this because they are in practice programs
which the "shader state machine" runs, but this is a red herring when it
comes to the principles involved here.

Just seems to make more sense...perhaps someone can shed some light on
my ignorance here???

I get the impression that you have a programmer's outlook on the issue at
hand. While that gives you the basic tools to understand the algorithms and how
binary logic works, you need a perspective shift to think in terms of "how
many gates would that take?" and to think of the problem in functional
blocks, because that's how chip designers do it. It's not like a von Neumann
program or anything, it is more like an N-dimensional array of gates. The two
main "camps" of design are synchronous and asynchronous logic; I got the
impression that NV would be in favour of synchronous logic but I could be
wrong. But if you approach the problem from this angle it might clear up
things...
 

Tim

John Lewis said:
In what way? Please explain?

It just seems to me that they can't compete with ATI in their price range, so
they're creating a "halo" product to garner prestige for the company. I'm
sure they don't expect to sell too many; it's more for promoting the brand
name anyway.

Video graphics technology is advancing so rapidly that I suspect this
dual-card solution will be integrated into a single-card, realistically
priced product long before this version reaches the end of its own life span.

Yes, solely for marketing reasons, never technical.

Pretty much the same thing, don't you think? NVidia's marketing department
seemed to try to convince us that 3dfx was inferior for technical reasons.
Also, I have found NVidia's own product descriptions so filled with
marketing jargon and double-talk that I need a third-party source just to
understand what they're talking about.
 
