best sub $400 CPU


gimp

for my new system my cpu budget will be about US$380 (NZ$550), i really
have two Athlon64 procs to choose from, the 4000+ or the X2-3800+.

the pc will be used mainly for 3D animation (single threaded), i don't
do much rendering these days, but occasionally. gaming performance is
very important :) most of my $$ will be going on a 7800GT or GTX and
1920x1200 display.

at this stage the 4000+ is my preference, it crushes the X2 in gaming,
but i remember when i had a dual-cpu system (even a lowly PIII) the OS
was a bit more responsive... so i can't decide :/ i've heard the X2-3800+
OCs pretty well, could that run at 2.4GHz and does it shorten the life
of the chip considerably...?
 

Daniel

gimp said:
... gaming performance is very important :) most of my $$ will be
going on a 7800GT or GTX ...

Not worth waiting for the ATI R520 / X1800 cards?

I know, I know. They've been ages in coming, yield problems being the
biggest issue.
However, indications are that when they're launched in 4 weeks' time it
will be a hard launch, similar to the 7800GTX, with volume availability.
Just thinking it would be annoying to buy a 7800, only to have it drop
in price not long after the R520 is released, or discovering that the
equivalent ATI card completely owns it.
I'm guessing the R520 may well be an overclocker's dream (considering
they're built on 90nm technology).
 

Dave - Dave.net.nz

Daniel said:
Not worth waiting for the ATI R520 / X1800 cards?
I know, I know. They've been ages in coming, yield problems being the
biggest issue. *snip*
I'm guessing the R520 may well be an overclocker's dream (considering
they're built on 90nm technology).

wouldn't yield problems indicate the opposite?
 

Daniel

Dave said:
Daniel wrote:
*snip*

wouldn't yield problems indicate the opposite?

My understanding is that issues were primarily with the number of pipes
that could be successfully taped out. After their 3rd attempt at taping
out the GPU, they seem to have sorted enough of the issues out to
actually release them. I understand that 16 pixel pipeline versions are
readily available (i.e. best yields), although I'm not sure if 24pp or
even 32pp (imagine crossfire with those cards ... grrrr) versions will
be available in the first week of October.
As far as clock speed goes, it's 90nm fab process tech, so I'd expect
higher clock speeds (apparently 600-650 MHz stock for top-end R520 GPU
core).
I'd be very interested to see what the power consumption and heat output
numbers for the new ATI cards will be - once those annoying NDAs have
expired of course.
Sooner or later, NVIDIA will have to go to 90nm or lower as well,
otherwise ATI will leave them in the dust.
 

Tony Hill

for my new system my cpu budget will be about US$380 (NZ$550), i really
have two Athlon64 procs to choose from, the 4000+ or the X2-3800+.

the pc will be used mainly for 3D animation (single threaded), i don't
do much rendering these days, but occasionally. gaming performance is
very important :) most of my $$ will be going on a 7800GT or GTX and
1920x1200 display.

at this stage the 4000+ is my preference, it crushes the X2 in gaming,

For gaming and any single-threaded applications, the Athlon64 4000+
will definitely be a faster performer than the X2 3800+.
but i remember when i had a dual-cpu system (even a lowly PIII) the OS
was a bit more responsive...

Yup, this is exactly why if I had $380 to blow on a chip, I would
definitely be getting the 3800+. That being said I don't do much
gaming and a lot of what I do involves either multithreaded apps or
multitasking.
so i can't decide :/ i've heard the X2-3800+
OCs pretty well, could that run at 2.4GHz and does it shorten the life
of the chip considerably...?

I would imagine that the 3800+ would overclock quite well, the lowest
speed grade of any given line of chips usually does. Considering that
the 3800+ and the 4600+ are built using the exact same die, chances of
hitting 2.4GHz are pretty good (though obviously by no means
guaranteed). It shouldn't really have any noticeable effect on the
lifespan of the chip either.

What is a bigger worry with overclocking the processor is that you
often end up overclocking other parts of the system and usually that is
what ends up limiting how high you can clock the chip. There are a
number of websites out there that specialize in overclocking and which
might offer you some decent insights into what you could expect from
the chip.
 

gimp

Daniel said:
Not worth waiting for the ATI R520 / X1800 cards?

Just thinking it would be annoying to buy a 7800, only to have it drop
in price not long after the R520 is released, or discovering that the
equivalent ATI card completely owns it.


my 3D app Maya can have issues with ATI drivers unfortunately, they're
probably getting better but the industry [at least with my app] tends
towards nVidia hardware which has been solid with Maya for several
years. but good point RE the possible price drop, i won't buy before
the ATI release anyway.
 

Daniel

gimp said:
my 3D app Maya can have issues with ATI drivers unfortunately, they're
probably getting better but the industry [at least with my app] tends
towards nVidia hardware which has been solid with Maya for several
years. but good point RE the possible price drop, i won't buy before
the ATI release anyway.

Cool :)

[...readies to suggest Quadro card, then faints at price...]
 

gimp

Tony said:
What is a bigger worry with overclocking the processor is that you
often end up overlocking other parts of the system and usually that is
what ends up limiting how high you can clock the chip. There are a
number of websites out there that specialize in overclocking and which
might offer you some decent insights into what you could expect from
the chip.

thanks for the info :p been doing some googling and apparently people
have clocked it as high as 2.8GHz (!). 2.4 would be enough for me... i
would have to research it more as i've never OC'd and don't want to melt
down the chip/mobo.

i just found this:

http://forums.extremeoverclocking.com/archive/index.php/t-183472.html

probably the X2 is gonna win out i think :)
 

XPD

gimp said:
for my new system my cpu budget will be about US$380 (NZ$550), i really
have two Athlon64 procs to choose from, the 4000+ or the X2-3800+.

the pc will be used mainly for 3D animation (single threaded), i don't do
much rendering these days, but occasionally. gaming performance is very
important :) most of my $$ will be going on a 7800GT or GTX and 1920x1200
display.

at this stage the 4000+ is my preference, it crushes the X2 in gaming, but
i remember when i had a dual-cpu system (even a lowly PIII) the OS was a
bit more responsive... so i can't decide :/ i've heard the X2-3800+ OCs
pretty well, could that run at 2.4GHz and does it shorten the life of the
chip considerably...?

I'm in the same boat as you..... :)
Personally tho, I'm going for the X2 - future-proofing the system for at
least 6 months ;) (Yeah right)
 

GSV Three Minds in a Can

Bitstring <[email protected]>, from the wonderful person gimp said:
at this stage the 4000+ is my preference, it crushes the X2 in gaming,
but i remember when i had a dual-cpu system (even a lowly PIII) the OS
was a bit more responsive... so i can't decide :/ i've heard the
X2-3800+ OCs pretty well, could that run at 2.4GHz and does it shorten
the life of the chip considerably...?

Only elevated temperature shortens the life of the chip, and even then
it needs to be (IIRC) about 15c higher to halve the life (from 120 years
to 60! - OK I guessed at the 120, but that's probably the usual design
goal). Merely overclocking doesn't do any harm (although ramping the
voltage to achieve higher clock speed definitely does reduce lifetimes
too, apart from the extra heat effect).

Chip power = constant * frequency * voltage * voltage .. as you can see,
ramping the (VCore) voltage has more effect than ramping the clock, but
still not a big issue if you can get the power away, and keep the chip
cool (and remember the AMD spec says 'it'll work for X years with the
core temperature at 80C' .. so you're probably already running well
inside the window).
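
To put rough numbers on that P ~ f * V^2 rule, here's a tiny C++ sketch -
the 2.0GHz/1.35V "stock" figures and the 2.4GHz/1.45V overclock are
made-up illustration values, not real X2-3800+ specs:

#include <cstdio>

// Rough power-scaling sketch using the P ~ f * V^2 relation above.
// The stock and overclocked figures are made-up illustration values,
// not measured X2-3800+ numbers.
int main() {
    double f0 = 2.0, v0 = 1.35;   // "stock": frequency (GHz), VCore (V)
    double f1 = 2.4, v1 = 1.45;   // overclock with a modest voltage bump
    double ratio = (f1 / f0) * (v1 * v1) / (v0 * v0);
    std::printf("relative power: %.2fx\n", ratio);   // prints ~1.38x
    return 0;
}

In other words the 20% clock bump alone is only 1.2x the power; it's the
VCore bump that pushes it towards 1.4x.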

Get the dual Core chip - =current= games might work better on the 4000+,
but game designers know how to code dual (or quad, or more) threaded
games, so future games may run lots nicer on a dual core chip - and even
for current games you'll at least be able to have all the WinXP OS cr&p
happening in the other CPU (which is significant sometimes).

I guess you could just stick an XP3x00+ single core chip in now, and
wait for the XP4800+ to come down in price. 8>.
 

Daniel

gimp said:
... most of my $$ will be going on a 7800GT or GTX and
1920x1200 display.

Assuming you mean LCD (1920x1200 native res for 23/24" LCDs - AFAIK),
what make & model are you looking at?

I've read the Dell 24" LCDs are awesome monitors and compare favourably
with the Apple 24" Cinema LCDs.
Also, read that the Philips 24" LCDs aren't that flash.

Just curious.

Cheers.
 

Daniel

GSV said:
... but game designers know how to code dual (or quad, or more) threaded
games...

Really?

Multi-threaded code is difficult enough in a single core CPU, let alone
a multi-core CPU.

I know that multi-core CPUs have been around for a while, but the
instances that I've seen (and worked with) allocate individual
processes/applications to a single CPU (i.e. one CPU to many apps), but
*not* the other way around - one app to many CPUs.
Please note, I'm referring to an app that is specifically written to
work with multiple cores (i.e. multi-core aware), and so is therefore
able to avoid deadlock situations *between* cores.
Sure, you can divide some problems and run them in parallel (like how a
modern GPU renders complex 3D scenes). However, these are very specific
types of problems which can be easily segmented.
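
For what it's worth, the textbook cure for the deadlock part - take the
locks in one consistent order, or let the library acquire them together -
is short enough to sketch in modern C++ (the names here are arbitrary;
the hard part, as noted further down, is doing this sort of thing
efficiently in a realtime loop):

#include <mutex>
#include <thread>

// Two threads that each need both locks. std::scoped_lock acquires the
// pair without risk of deadlock (the consistent-order/backoff trick done
// for you). Locking a-then-b in one thread and b-then-a in the other is
// the classic way to hang.
std::mutex a, b;

void worker() {
    std::scoped_lock lock(a, b);   // C++17; pre-17: std::lock + adopt_lock guards
    // ... touch the data guarded by both mutexes ...
}

int main() {
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    return 0;
}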

AMD and Intel were basically forced to go dual core because of the
limitations they encountered with higher clock speeds.

Game developers aren't exactly jumping up and down with joy over the
prospect of developing multi-core games.
Agreed, they'll have to now - particularly with next gen consoles all
using multi-core PowerPC CPUs.

However, to say that "game designers know how to code dual (or quad, or
more) threaded games" in the context of a dual (or multi) core CPU does
seem a little premature at this stage.
 

GSV Three Minds in a Can

from the wonderful person Daniel said:
Really?

Multi-threaded code is difficult enough in a single core CPU, let alone
a multi-core CPU.

<snip>

Jeez, if you can do it for a 4 CPU workstation, what's the issue doing
it for a dual core CPU? Note I didn't say they were going to do it
=perfectly= and achieve a 2x speedup in the gameplay .... but there's
plenty of stuff that's just dying for some parallel processing (the
AI(s), the UI, the graphics upstream of the Graphics card, etc.)

I thought avoiding deadlocks was a solved problem since Knuth volume
<n>, more years ago than I care to count, and that was without the
hardware assistance we get these days ...

Daniel said:
However, to say that "game designers know how to code dual (or quad, or
more) threaded games" in the context of a dual (or multi) core CPU does
seem a little premature at this stage.

Hmm, guess I could make some money teaching courses then, it's not like
it's rocket science or anything. Now I have a few fractals programs
which =are= going to need a bit of rocket science, but hey, that's my
fault for coding them in x87 assembler ...
 

Daniel

GSV said:
<snip>

Jeez, if you can do it for a 4 CPU workstation, what's the issue doing
it for a dual core CPU? Note I didn't say they were going to do it
=perfectly= and achieve a 2x speedup in the gameplay .... but there's
plenty of stuff that's just dying for some parallel processing (the
AI(s), the UI, the graphics upstream of the Graphics card, etc.)
Okay, I'll bite.

What multi-CPU aware apps are you using then?

Just because you can run a program across X number of CPUs in a
workstation doesn't make it multi-core aware.

Also, as I said in the original post, there are some problems that can
be easily segmented - and I did mention graphics.

I thought avoiding deadlocks was a solved problem since Knuth volume
<n>, more years ago than I care to count, and that was without the
hardware assistance we get these days ...
Avoiding deadlocks is easy. Doing so efficiently in a realtime
environment is the real trick (we already get enough deadlocks in single
core code - and yes the intention was to avoid a deadlock, but it still
happens).

I've used shared memory and semaphores for the locking (usual IPC), to
coordinate between multiple processes running on a multi-CPU server. The
"application" in this sense is a combination of disparate processes.
However, neither of these processes "know" they're running on a
multi-CPU system. As far as they're concerned they're only running on a
single CPU. Each process has its own process space (i.e. it's not
*shared* with the other processes).
Surely, the benefit of a multi-core CPU would be for the cores to
operate on the *same* process/memory space. Otherwise you're limited to
the types of problems that both CPUs can work on simultaneously (i.e.
not much benefit vs. a single core CPU).
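
For contrast, a minimal sketch of the shared-address-space version: two
threads in one process updating the same variable with nothing but a
mutex, rather than shared-memory segments and semaphores. Modern C++
syntax for brevity; the summing work is only a placeholder:

#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

// Minimal sketch: two threads in *one* process updating the same memory,
// guarded by a plain mutex - no shared-memory segments or semaphores.
// The work (summing a vector in two halves) is just a placeholder.
int main() {
    std::vector<int> data(1000, 1);
    long total = 0;
    std::mutex m;

    auto worker = [&](std::size_t begin, std::size_t end) {
        long local = 0;
        for (std::size_t i = begin; i < end; ++i) local += data[i];
        std::lock_guard<std::mutex> lock(m);   // serialise the shared update
        total += local;
    };

    std::thread t1(worker, std::size_t{0}, data.size() / 2);
    std::thread t2(worker, data.size() / 2, data.size());
    t1.join();
    t2.join();
    std::printf("total = %ld\n", total);       // prints 1000
    return 0;
}

That is the multi-core win Daniel is describing: both cores can chew on
the same data structure directly.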

Hmm, guess I could make some money teaching courses then, it's not like
it's rocket science or anything. Now I have a few fractals programs
which =are= going to need a bit of rocket science, but hey, that's my
fault for coding them in x87 assembler ...
If you've written a multi-threaded single core program in assembler (not
to be confused with multi-tasking - sorry, just being sure), then dude -
surely you'd know the hassles involved in getting all that to work.

Now imagine all those problems multiplied because now you've got to
synchronise across 2 or more CPU cores.

Odd you should be using assembler? Compiler optimizations are pretty
good these days (unless you're into deliberately writing obfuscated code
of course).
Or were you just being facetious?
 

Daniel

Daniel said:
GSV Three Minds in a Can wrote:

If you've written a multi-threaded single core program in assembler (not
to be confused with multi-tasking - sorry, just being sure), then dude -
surely you'd know the hassles involved in getting all that to work.

Now imagine all those problems multiplied because now you've got to
synchronise across 2 or more CPU cores.

Odd you should be using assembler? Compiler optimizations are pretty
good these days (unless you're into deliberately writing obfuscated code
of course).
Or were you just being facetious?

Plus debugging multi-threaded code is a nightmare.

Debugging multi-threaded multi-core code.... yuk.
 

Derek Baker

Daniel said:
Okay, I'll bite.

What multi-CPU aware apps are you using then?

Just because you can run a program across X number of CPUs in a
workstation doesn't make it multi-core aware.

Doesn't it? I was under the impression that it had to be multi-threaded to
run on multiple processors, and that means it will take advantage of
multi-core CPUs.
 

GSV Three Minds in a Can

from the wonderful person Daniel said:
Okay, I'll bite.

What multi-CPU aware apps are you using then?

None at the moment, because I can't afford a multi CPU workstation to
play with, which is why I've been drooling over the x2 chips for some
time..
Media encoding and rendering are the obvious apps which would soak up
lots of cores/CPUs with ease.

If you've written a multi-threaded single core program in assembler
(not to be confused with multi-tasking - sorry, just being sure), then
dude - surely you'd know the hassles involved in getting all that to
work.

Minor hassles, and certainly no worse than writing interrupt driven OS
code and trying to weave that around the regular applications. Some game
(engine) designers are really smart people (some are just glorified
graphics artists or authors or musicians, which is why the team are now
so huge).

Yeah, debugging is an issue, but hey, these folks can't debug what they
deliver today, so nothing new there.. 8>.
Now imagine all those problems multiplied because now you've got to
synchronise across 2 or more CPU cores.

Odd you should be using assembler? Compiler optimizations are pretty
good these days (unless you're into deliberately writing obfuscated code
of course).
Or were you just being facetious?

Nope, there (was) no other way of doing 80bit maths on an x87 and
keeping the (right) 80 bit values in the (right) x87 registers in the
stack at that time. These days maybe you could do as well with SSEn,
although I suspect not.

If you want to 'fly through' a Julia set you can fly a lot deeper (in
reasonable time - i.e. without getting into multi-word arithmetic) with
80 bit operands than you can with 64 bits, before you run into the
pixellation limit (i.e. where adjacent pixels have the =same= floating
point value to N bits, and your picture blacks out). All the C/C++
compilers I looked at didn't really believe in 80bit data values, and
certainly didn't have a clue as to how to leave them in the FPU for the
whole of a scan line.
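
For anyone curious what their own compiler does with 80-bit values, a
two-line probe is enough - on x86 gcc, long double is normally the
80-bit x87 format (64 mantissa bits), while MSVC quietly maps it to a
64-bit double, which is exactly the complaint above:

#include <cfloat>
#include <cstdio>

// Quick probe of what 'long double' actually is on this compiler.
// 64 mantissa bits => the 80-bit x87 format; 53 => it's just a double.
int main() {
    std::printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
    std::printf("mantissa bits: %d (plain double has %d)\n",
                LDBL_MANT_DIG, DBL_MANT_DIG);
    return 0;
}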

I've got some code sitting here which ought to be able to soak up however
many cores I can afford to throw at it - one thread for the display
(rate limited by how fast the user flies &/or the availability of frame
buffers), one for the UI (may be mostly idle), and 1-N threads (1 per
frame) doing the calculation, (allowing as how you may have to dump
future frames if the user decides to scroll sideways). All one process,
although nobody gets to play with anyone else's frame buffer until it is
'done', and there is no interaction between frame<N> and frame <N+1>
during the calculation phase. Actually there isn't any interaction
between each scan line and the next one IIRC, so I could actually toss
1024 cores at =each= frame buffer (but it isn't coded that way).
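
A rough sketch of that per-scan-line decomposition - not the actual code
described above, just an illustration with arbitrary image size,
iteration cap and Julia constant:

#include <complex>
#include <thread>
#include <vector>

// Per-scan-line decomposition of a Julia set render. Each worker writes
// only its own row of the buffer, so no locking is needed. Image size,
// iteration cap and the constant c are arbitrary illustration values.
static int julia(std::complex<long double> z,
                 const std::complex<long double>& c) {
    int i = 0;
    while (std::norm(z) < 4.0L && i < 256) { z = z * z + c; ++i; }
    return i;
}

int main() {
    const int W = 640, H = 480;
    std::vector<int> image(W * H);
    const std::complex<long double> c(-0.8L, 0.156L);

    auto render_row = [&](int y) {
        for (int x = 0; x < W; ++x) {
            std::complex<long double> z((x - W / 2) * 3.0L / W,
                                        (y - H / 2) * 2.0L / H);
            image[y * W + x] = julia(z, c);
        }
    };

    // One thread per row keeps the sketch short; a real renderer would
    // use a pool sized to the number of cores instead of 480 threads.
    std::vector<std::thread> workers;
    for (int y = 0; y < H; ++y) workers.emplace_back(render_row, y);
    for (auto& t : workers) t.join();
    return 0;
}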

Chess plays pretty well on multi-CPU systems of course, and I don't see
why an X2 is going to be any different from a two CPU workstation in
that regard - Fritz<n> should be able to handle it right out of the box.
Not that I'm very excited by that, except for analysis - I already can't
beat Fritz on an XP3000+.

For something like Morrowind, I guess you'd turn most of the processing
power loose on the 'wandering monsters' (and sundry mobile bits of
scenery/weather) which need animating, and where the interaction between
the 'objects' is actually pretty small (and again you can play the 'next
frame, frame after that' trick).

Wait and see .. however history says that game designers have never let
complications of technology stand in the way of consuming all the PC
they can find, and then some - and in 5(?) years time I bet you'll have
trouble buying a single core desktop CPU chip.
 

Daniel

Derek said:
Doesn't it? I was under the impression that it had to be multi-threaded to
run on multiple-processors, and that means it will take advantage of
multi-core CPUs.
You're right - it does.

I was incorrect with a few of my assertions and line of thinking.
 

Daniel

GSV said:
None at the moment, because I can't afford a multi CPU workstation to
play with, which is why I've been drooling over the x2 chips for some
time..
Media encoding and rendering are the obvious apps which would soak up
lots of cores/CPUs with ease.
My error. Just needs to be multi-threaded.

Minor hassles, and certainly no worse than writing interrupt driven OS
code and trying to weave that around the regular applications. Some game
(engine) designers are really smart people (some are just glorified
graphics artists or authors or musicians, which is why the team are now
so huge).

Yeah, debugging is an issue, but hey, these folks can't debug what they
deliver today, so nothing new there.. 8>.
True.



Nope, there (was) no other way of doing 80bit maths on an x87 and
keeping the (right) 80 bit values in the (right) x87 registers in the
stack at that time. These days maybe you could do as well with SSEn,
although I suspect not.

If you want to 'fly through' a Julia set you can fly a lot deeper (in
reasonable time - i.e. without getting into multi-word arithmetic) with
80 bit operands than you can with 64 bits, before you run into the
pixellation limit (i.e. where adjacent pixels have the =same= floating
point value to N bits, and your picture blacks out). All the C/C++
compilers I looked at didn't really believe in 80bit data values, and
certainly didn't have a clue as to how to leave them in the FPU for the
whole of a scan line.

I've got some code sitting here which ought to be able to soak up however
many cores I can afford to throw at it - one thread for the display
(rate limited by how fast the user flies &/or the availability of frame
buffers), one for the UI (may be mostly idle), and 1-N threads (1 per
frame) doing the calculation, (allowing as how you may have to dump
future frames if the user decides to scroll sideways). All one process,
although nobody gets to play with anyone else's frame buffer until it is
'done', and there is no interaction between frame<N> and frame <N+1>
during the calculation phase. Actually there isn't any interaction
between each scan line and the next one IIRC, so I could actually toss
1024 cores at =each= frame buffer (but it isn't coded that way).

Chess plays pretty well on multi-CPU systems of course, and I don't see
why an X2 is going to be any different from a two CPU workstation in
that regard - Fritz<n> should be able to handle it right out of the box.
Not that I'm very excited by that, except for analysis - I already can't
beat Fritz on an XP3000+.
Yes. As long as problems can be segmented in such a way that it's easy
for a multi-core CPU to operate on them efficiently, then you get a
significant performance boost (e.g. GPUs with multiple pipelines).

Game engines (for twitch games at least) revolve around very tight
loops. You can certainly offload a number of tasks to run in parallel;
however, the trick is to do so without incurring a significant penalty
during execution (i.e. avoiding CPU cache misses as much as possible).
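
The frame-level version of that offloading looks roughly like this: fork
independent subsystem updates, join before rendering. The subsystem names
and fixed frame count are made up, and the per-frame join is exactly
where the overhead being talked about shows up:

#include <future>
#include <cstdio>

// Sketch of frame-level fork/join: kick off independent subsystem updates
// for one frame, then join before handing the results to the renderer.
// Subsystem names and the fixed frame count are illustrative only.
void update_ai(double /*dt*/)      {}
void update_physics(double /*dt*/) {}
void update_audio(double /*dt*/)   {}

int main() {
    const double dt = 1.0 / 60.0;
    for (int frame = 0; frame < 3; ++frame) {
        auto ai      = std::async(std::launch::async, update_ai, dt);
        auto physics = std::async(std::launch::async, update_physics, dt);
        update_audio(dt);   // keep one job on the main thread
        ai.get();           // join: everything finished before we render
        physics.get();
        std::printf("frame %d ready to render\n", frame);
    }
    return 0;
}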

For something like Morrowind, I guess you'd turn most of the processing
power loose on the 'wandering monsters' (and sundry mobile bits of
scenery/weather) which need animating, and where the interaction between
the 'objects' is actually pretty small (and again you can play the 'next
frame, frame after that' trick).
Yep. Indeed, when interaction with other objects is at a minimum,
processing the surrounding environment perhaps isn't such a big deal.
For turn-based games (strategy) like chess, one would expect a
noticeable benefit with multi-cores.
The tricky situations are where there is a lot of interaction going on -
again this is most likely for realtime games (team sports, FPS, RTS).

Wait and see .. however history says that game designers have never let
complications of technology stand in the way of consuming all the PC
they can find, and then some - and in 5(?) years time I bet you'll have
trouble buying a single core desktop CPU chip.
As my work colleague says "threads are evil", and life is sooo much
easier without them (google it online - a few interesting links).
I have no doubt game developers will eventually learn to make the most
of multi-core CPUs.
I agree - I imagine once more programs start appearing that take
advantage of multiple cores, people may ask how we ever made do with
single core CPUs.

However, I still disagree with your initial statement about the current
capability of developers in regards to multi-core CPUs.
 

GSV Three Minds in a Can

Bitstring <[email protected]>, from the wonderful person
Daniel said:
However, I still disagree with your initial statement about the current
capability of developers in regards to multi-core CPUs.

I guess we'll just have to agree to disagree .. I'll allow as how they
aren't pushing any such games out the door yet, and as how 80% of the
games design team are technologically clueless, but in there someplace
there are some smart cookies who are quite competent to use multiple
cores (or CPUs) .. and are probably coding already. If not, they're
missing a trick.

Whether it'll add anything significant to the FPS (First Person Shooter
- not Frames/sec) I don't know, but it'll probably make Civilization 5,
or Warlords 4, or whatever, even harder to beat than they are already.
If nothing else, I'll maybe be able to play Morrowind 4 and read mail at
the same time (or maybe not - games tend to be pretty 'selfish', so the
games engine will probably steal all the cores/CPUs it can find).

Anyway, I'm off to shop in the next few weeks .. just waiting for the
X2s to get a teeny weeny bit less expensive, and for the pioneers to
finish debugging the motherboards and BIOSs for me, and the
proliferation of PSU specs to shake out.

Now if only Intel would give AMD some serious competition, we might see
prices come down a bit faster .... hmm, weren't we saying that the other
way round ~4 years ago?
 
