Intel, AMD diverge on multicore strategies

muckshifter
Moderator
By Ed Sperling, Editor in Chief -- Electronic News, 23/5/2007

Intel and AMD, which for years have played a game of leapfrog when it comes to x86 processor technology, are now headed in radically different directions.

The divergence lies in the multicore strategy each is adopting. Intel is favoring homogeneous cores, while AMD is opting for heterogeneous cores. And while this may seem like a rather fine technology distinction, the effect on efficiency and the price of chips could be significant.

James Held, Intel fellow and director of Many-Core Research, said the heterogeneous approach—different-size cores for different dedicated functions—may have some advantages in the short term, but he said it also limits the flexibility of resources.

“We’re doing everything we can to do homogeneous cores,” Held said. “For a more general platform, the more you specialize the more concern there is about adequate utilization.”

AMD disagrees. Phil Hester, AMD’s chief technology officer, outlined the company’s strategy during a recent roundtable discussion with Electronic News/Electronic Business, saying that because applications are becoming diverse in the client space, homogeneous multicores are not a viable solution.

“It’s going to be heterogeneous multicores,” Hester said. “If you look at software stacks, Vista is going to make 3D graphics standard the way floating point became standard in the 486 generation. The first heterogeneous multicore will be a CPU (central processing unit) and GPU (graphics processing unit) in the client space.”

He added that each core will be “autonomous enough to figure out what’s running on it to be able to adjust its power level up and down in hardware.”

While neither approach solves the issue of writing software that works across multiple cores, which has been a nagging problem in the consumer and personal computing world, the focus now has shifted from running one application faster to running multiple applications more efficiently. Tom Halfhill, senior analyst at In-Stat, believes the heterogeneous core approach is the more efficient of the two.

“What AMD is doing is taking the ATI graphics core and integrating it with the CPU,” Halfhill said. “Intel doesn’t own a graphics chip company, so they have to have a different solution.”

He said the advantage of a heterogeneous approach is that the core only needs to be as powerful as the application dedicated to run on it. A core might be used, for example, for encryption/decryption or for virus scanning, which would not require the full power of an x86 processor.

If the different cores require different instruction sets, the heterogeneous approach looks even more complicated. But he added that the chips could also do more work with less power, an approach already common in the embedded processor space, where the various heterogeneous processors are referred to as engines. He said ultimately Intel may opt for that path—particularly if it develops or buys a graphics chip.

... and to answer your next question ...

In the never-ending quest for more computational power, many in the industry already see the end in sight for conventional multi-processor, multi-core architectures. After a while, just adding more processors to a system will have no effect. If a system has more cores than you have application threads, all the extra CPUs just become Lilliputian space heaters.
The heterogeneous approach offers greater efficiency by using specialized processing engines that can be matched more closely with different types of application code. A specialized chip, such as a GPU, an FPGA or a vector processor, can replace 100 conventional processors for certain types of codes. So the upside potential is enormous.
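The "space heater" point above is just arithmetic: a homogeneous multicore is only as useful as the parallelism the application exposes. A minimal illustration (the function name is mine, not from the article):

```python
# Illustrative only: idealized utilization of a homogeneous multicore
# when the application exposes fewer threads than there are cores.
# Ignores scheduling overheads, memory contention, etc.

def utilization(cores: int, threads: int) -> float:
    """Fraction of cores doing useful work (idealized model)."""
    return min(cores, threads) / cores

# A 4-thread workload keeps a quad-core busy, but on a 64-core part
# most of the silicon sits idle -- the extra cores heat the room.
print(utilization(4, 4))    # -> 1.0
print(utilization(64, 4))   # -> 0.0625
```

Under this toy model, doubling the core count without doubling the thread count simply halves utilization, which is why specialized engines matched to the workload look attractive.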

But the transition from homogeneous to heterogeneous processing is likely to be a lot tougher than going from single- to multi-core. For one thing, the single-to-multi transition for conventional processors was a fairly simple process -- literally, just adding more of the same. But mixing different types of processing architectures into a unified system confronts the system architect with much bigger challenges. Some of the most pressing are:

  • How tightly do you couple the various processing engines -- on-chip, on the board or in the cluster? One could even envision specialized processors distributed across a LAN/WAN. (Conceptually, this model already exists as Grid computing.)
  • What mix of processing engines do you use? There are a lot to choose from today and I suspect more are on the way.
  • What will be the ratio of the different types of processors in a system?
And then there's the central problem of software. As difficult as it was (and is) to scale applications across more homogeneous processors, it will be significantly more complex to slice up applications across a heterogeneous architecture. Heterogeneous-aware software (compilers, run-times, process/job schedulers, etc.) that intelligently maps the application code onto the available processor resources will be required for any sort of productive use of such systems. But how do we design such software? This was the question most recently posed by the director of the Center for Scalable Application Development Software, Ken Kennedy: "How do you build software tools that are scalable from a system with a single homogeneous processor to a high-end computing platform with tens, or even hundreds, of thousands of heterogeneous processors?"
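The mapping problem Kennedy describes can be sketched very roughly: a heterogeneous-aware run-time must decide, for each piece of work, which engine type suits it best, falling back to a general-purpose core when the preferred engine is absent. All names here are invented for illustration; this is a toy sketch, not any real scheduler:

```python
# Toy sketch of heterogeneous-aware scheduling (all names hypothetical):
# map a task's workload type to a preferred engine, with a
# general-purpose x86 core as the homogeneous fallback.

ENGINE_AFFINITY = {
    "dense_linear_algebra": "gpu",        # data-parallel math
    "packet_filtering":     "fpga",       # fixed-function bit twiddling
    "virus_scan":           "small_core", # doesn't need a full x86 core
}

def schedule(task_kind: str, available_engines: set) -> str:
    """Pick the preferred engine for a task, else a general x86 core."""
    preferred = ENGINE_AFFINITY.get(task_kind)
    if preferred in available_engines:
        return preferred
    return "x86_core"  # homogeneous fallback

print(schedule("dense_linear_algebra", {"gpu", "x86_core"}))  # -> gpu
print(schedule("virus_scan", {"gpu", "x86_core"}))            # -> x86_core
```

Even this trivial version hints at the real difficulty: the affinity table, the fallback policy, and the ratio of engine types all have to be discovered or tuned per system, which is exactly what the compilers, run-times and schedulers above would have to automate.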

http://www.hpcwire.com/hpc/897414.html


Some of this may go over your head ... all it really means is that we may well see new CPUs based on new technology & software ... coming to a screen near you soon.

Happy reading.
:user:
 
Cheers Mucks that was a very interesting read :thumb:

One thing:

“Intel doesn’t own a graphics chip company, so they have to have a different solution.”

Intel is a graphics chip company - and the biggest one at that.
 
