The Xbox 2: Inside and Out, Part I
By: César A. Berardini - "Cesar"
Aug. 3rd, 2004


Table of Contents

* Introduction
* The Processor
  * Microprocessor concept
  * History of microprocessors
  * POWER to the People
  * Microsoft teams up with IBM
  * More POWER is needed
  * Microsoft has the POWER
  * Summary
* The Graphics Chip
  * HD Gaming
  * Embedded VRAM
* Memory and Bandwidth

Introduction

"Begun this console war has!", Yoda would say after hearing the
comments by both Microsoft Corp CEO Steve Ballmer, who proclaimed, "I
am betting we can take Sony in the next generation," and Ken Kutaragi,
CEO of Sony Computer Entertainment, who promised a playable
PlayStation 3 for E3 2005. Let's not forget Nintendo's Revolution and
their European Managing Director slamming Microsoft.

As consoles' life cycles approach their end, we start hearing news
about the next generation of systems, with all the marketing
strategies each company is known for. Sony is talking about its Cell
processor, a microchip that is supposed to be years ahead of anything
available on the market. We know Sony is all about "teraflops", "Toy
Story graphics in real-time" and the "Emotion Engine" when it comes
to its PlayStation. Nintendo doesn't talk about hardware; they simply
don't discuss technology at all. For Nintendo, it is all about
creativity in gameplay design.

As for the Xbox, the Redmond giant initially emphasized its
superior hardware, for the simple reason that it was coming a year
after the PlayStation 2. But now that the Xbox is an established
player in the videogame industry, Microsoft is determined to have the
next wave of console wars take place in its own backyard.

Microsoft is trying to convince us that software is what matters.
After all, that has been the company motto for the last two decades.
Following that trend, Xbox evangelists have only talked about
next-generation software, a.k.a. XNA, and they have stated there won't
be any hardware discussion this year. But even those efforts can't
stop the buzz stirred up by a handful of official announcements about
"future Xbox products", licensing agreements, and top-secret deals.
Oh, and we can't forget about those supposed leaked and rumored specs.

Since software has nowhere to run without hardware, today we begin
our two-part look at Microsoft's next-generation console, the Xbox
successor. While Microsoft continues to play the "no comment" card,
there is plenty of substantial information that helps us draw a
picture of what the Xbox 2 (Xenon, or whatever you'd like to call it)
will be like. Let's begin.


The Xbox 2 Processor

Let's start with the true brain of any computer: the processor. Yes,
after all, a videogame console is nothing more than a computer whose
hardware is dedicated to playing games instead of the general-purpose
tasks of a PC. The info we have so far tells us that the Xbox
successor will be powered by a POWER processor. But before we can
talk specifically about this architecture, it is important to
summarize some basic concepts related to processors in general.


Microprocessor

A microprocessor, usually referred to as a processor, is an
integrated circuit that performs many arithmetic and logic operations
in a short amount of time, acting as the central processing unit
(CPU) of a system. Nowadays, there are processors controlling your
car's electronics, your microwave, your TV, and so on.

A processor is made of transistors, which are basically tiny
electronic switches. The processor executes a collection of
instructions based on whether these switches are on or off, with the
two possible states usually represented in binary logic: zeros and
ones. The operations a processor performs boil down to adding and
subtracting numbers (from which multiplication and division are
built), comparing two numbers, and moving numbers from one place to
another. These operations are defined by the "instruction set" each
processor design imposes.

This instruction set, plus the clock rate (or speed) and the widths
of the internal and external buses, defines the different types and
families of processors.
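
To make that concrete, here is a sketch of how a single line of C
breaks down into those primitive operations. The commented mnemonics
are generic PowerPC-style instructions we chose purely for
illustration; they are not taken from any Xbox 2 material.

    /* Illustration only: one C statement decomposed into the primitive
     * moves, adds, and compares described above. The mnemonics in the
     * comments are generic PowerPC-style instructions (lwz = load word,
     * stw = store word, cmpwi = compare word immediate). */
    int add_score(int *total, int bonus)
    {
        *total = *total + bonus;  /* lwz   r5, 0(r3)   load *total     */
                                  /* add   r5, r5, r4  add the bonus   */
                                  /* stw   r5, 0(r3)   store it back   */
        return *total > 100;      /* cmpwi r5, 100     compare, then   */
                                  /* set the result from the flag      */
    }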


A Little History

The first commercial microprocessor was the Intel 4004, introduced by
Intel Corporation in 1971. This 4-bit processor was designed for
calculators, and it was followed by 8-bit and 16-bit processors, with
the Intel 8086 being the most successful of all because it started
what is now known as the x86 architecture.

As you can imagine, the techniques used to place transistors have
improved over time, allowing manufacturers to fit more circuits in
the same area, increasing the density of transistors and therefore
the computational power. Processors became more powerful and their
word size increased, allowing them to process up to 32 bits of data
in the '80s. Now, 64-bit processors (such as the Athlon 64 from AMD
or the IBM PowerPC 970 used in the Apple PowerMac G5) can be found in
desktop computers; they are no longer exclusive to workstations or
servers.

The x86 processors we hear about all the time, from the 486 to the
latest Pentium 4 from Intel or the Athlon family from AMD, belong to
a category known as CISC (Complex Instruction Set Computing) which,
as the name implies, has a complex instruction set. This primarily
means the instructions the processor deals with can be variable in
length. The word "complex" is used because each instruction can
perform several operations, including memory accesses and address
calculations, besides the standard arithmetic and logic operations
any processor is capable of. CISC processors were created when memory
was expensive and compilers were inefficient. The term CISC itself
was coined to distinguish existing processors from the RISC
processors invented in the mid-1980s.

RISC stands for Reduced Instruction Set Computing and is
characterized by the use of fixed-length instructions. RISC
architectures were developed when the price of memory was no longer a
big deal and software compilers had improved. RISC puts the emphasis
on software and uses simple instructions that can be performed in a
single clock cycle. The advantage of RISC architectures is that they
require fewer transistors than their CISC counterparts and all
instructions take the same amount of time, enabling a feature called
pipelining, which we'll discuss later.


POWER to the People

In 1992, Apple, IBM, and Motorola formed a joint venture, known as
AIM, to produce a personal computer processor derived from the
POWER1, a RISC CPU that IBM had previously designed for servers and
workstations. Based on IBM's 801 processor (considered the first RISC
CPU), the POWER1 was the result of the America Project, which in the
mid-'80s set out to build the most powerful CPU. Right now, the POWER
architecture is in its fifth iteration, the POWER5.



The result of the AIM alliance was the PowerPC, a mainstream
processor intended for use in personal computers (hence the name)
that had some of the features found in its big brother, the POWER1.
The PowerPC has evolved into what today is known as the PowerPC 970,
a 64-bit processor known in the Apple world as the G5.

That sums up the CISC, RISC and the POWER/PowerPC architectures. Now
let's jump to 2003.

If You Can't Beat 'Em, Join 'Em

When, in a world exclusive, we revealed that long-time rivals
Microsoft and IBM were teaming up for the Xbox successor, everyone
was shocked. Those who know the videogame and PC industries thought
it was a joke. How could Microsoft and IBM, the partner of both Sony
and Nintendo, team up for the next Xbox? After IBM let Microsoft
market MS-DOS (which Microsoft had bought for only $50,000) in 1980,
allowing Bill Gates to make his fortune and later partner with Intel,
how could it be possible that IBM and Microsoft would form an
alliance?

As soon as you try to answer those questions, common sense takes
another hit. Why would Microsoft choose the POWER architecture? If
one of the key features of the Xbox was its ease of development,
thanks to its Intel Pentium processor (an x86 architecture every
programmer knows), why would Microsoft switch from the precious
Wintel platform to another architecture? Things get more bizarre once
you also take into account that this is the same architecture Apple
(another longtime rival of Microsoft) has been using. All of a
sudden, there are company names (Apple, IBM, Intel, Microsoft,
Nintendo, and Sony) that you can only associate with the idea of
mixing oil and water.

Well, everything has an explanation. If you consider some of the
things that are happening in this and other industries, you can see
the correlation. The key to solving this puzzle is as simple as
knowing some technology trends and using the "why not" approach.

Why not partner with IBM when they are the company behind the PS3's
Cell processor? Why not partner with Big Blue when they are making
advancements in chip-making techniques (like copper interconnects and
Silicon-on-Insulator) way ahead of Intel?


We Need More POWER

The dilemma Microsoft faces is that, if the grid computing technology
IBM is touting for the Cell processor can do all the things they
claim, the PS2 successor could be years ahead of any other solution
its competitors come up with.

IBM, Toshiba, and Sony claim they will create a processor that breaks
Moore's Law, making the Cell processor truly state-of-the-art. For
those who don't know, Moore's Law (named after Intel co-founder
Gordon Moore) states that the number of transistors in a
microprocessor doubles approximately every 18 months. The truth is
that this prediction has held since it was first made in 1965, and
Intel has invested heavily in research just to keep the principle on
track through 2015. As you can see, Moore's Law isn't about computing
power but about how many transistors can be placed within a given
area, which in turn indicates what kind of advancements can be
achieved and the performance attainable.
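
Expressed as a formula, the version of the law cited above is
N(t) = N0 x 2^(t/1.5), with t in years. A minimal sketch in C, purely
illustrative:

    #include <math.h>
    #include <stdio.h>

    /* Moore's Law as stated above: transistor counts double roughly
     * every 18 months (1.5 years). */
    double moore_factor(double years)
    {
        return pow(2.0, years / 1.5);
    }

    int main(void)
    {
        /* For example, over one five-year console generation the law
         * predicts roughly a tenfold increase in transistor budget. */
        printf("growth over 5 years: %.1fx\n", moore_factor(5.0));
        return 0;
    }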

Microsoft is well aware that Sony could have a processor that breaks
Moore's Law in the next round of the console wars, allowing the
Japanese consumer electronics giant to win the battle once again. For
the Xbox successor, Microsoft needs more power than a standard
solution can offer.

Basically, there are two choices. Either they can reach more
computational power through higher clock speeds, a result of placing
more transistors within the same space (a faster processor), or they
can simply combine current technology to increase performance. You've
certainly heard the saying: "two heads are better than one".

If the Cell is all that is promised and Microsoft chooses a standard
solution (as it did for the first Xbox) for the Xbox successor, the
risk of being left behind by the competition is high.

Now we have to figure out why Microsoft decided to drop the x86
option, and why IBM's architecture was chosen.



"I Have the POWER!"(Performance Optimization With Enhanced RISC)

Whereas the first Xbox was labeled by its detractors as a downgraded
PC in a box (because of the existing hardware specs found on PCs at
that time), this time around Microsoft plans to have a "supercomputer
in a box" by using the POWER architecture.

As explained previously, the POWER architecture is a totally
different approach from the traditional x86 architecture. It has a
RISC nature, in contrast to x86's CISC design. POWER was designed
from the beginning as a very powerful chip that can be scaled from
the low end to the high end of the server environments it was
designed for. It is in its fifth generation, making it a proven and
mature technology, and it is the first solution that allows other
companies to design and make their own implementations of the
architecture. This is exactly what Microsoft is doing. By now, you
can start to gather why Microsoft is opting for a non-traditional
architecture and has chosen to break new ground instead.

According to the documents leaked last April, the Xbox 2 processor is
a custom design with simultaneous multithreading and real-time
graphics in mind. For the sake of this article, we'll call it POWERx.

POWERx unifies both the POWER and PowerPC architectures in one BMF
chip built using the most advanced chip-making technologies. It has
POWER, because it has three 64-bit 3 GHz+ cores, making it the first
videogame system processor with a multi-core design on a single die
(a layout also known as "SMP on a chip"). That is why we say the
Xbox 2 will be a supercomputer in a box.

It is a PowerPC because it descends from the POWER5+; it has been
designed more as a processor for a consumer device than for a server,
so it has lower power consumption, and it is focused on
floating-point performance and multiprocessing capabilities, with the
inclusion of a SIMD/vector engine, a specialized unit not found in
POWER processors. The POWERx is supposed to be a big-endian system,
contrary to the latest PowerPC processors, which support both
big-endian and little-endian memory models.

As mentioned above, the POWERx features simultaneous multithreading
(SMT), which allows each of the three cores to process two threads at
a time, giving the Xbox 2 CPU six hardware threads per clock cycle.
Some of you might be wondering what a thread is and what exactly SMT
means. Although hundreds of pages could be written to explain these
concepts, in simple terms a thread is an individual sequence of
instructions, and simultaneous multithreading is therefore the
ability of a single processor to handle several threads at the same
time.

In the case of the POWERx, its ability to process six threads
simultaneously will make the chip behave somewhat like six
conventional processors. This will allow multiple applications to run
independently on different cores, or a single multithreaded
application to perform multiple tasks at once.
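
For illustration, here is what "six threads at once" looks like from
the software side, as a POSIX threads sketch in C. The thread count
comes from the leaked three-cores-times-two-threads figure;
everything else (the names, the printf workload) is our invention,
not an Xbox 2 API:

    #include <pthread.h>
    #include <stdio.h>

    #define HW_THREADS 6   /* 3 cores x 2 SMT threads, per the leak */

    /* Each thread could be running an independent task: AI, audio
     * mixing, physics, and so on. */
    static void *worker(void *arg)
    {
        int id = *(int *)arg;
        printf("thread %d running on its own hardware context\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[HW_THREADS];
        int ids[HW_THREADS];

        for (int i = 0; i < HW_THREADS; i++) {
            ids[i] = i;
            pthread_create(&threads[i], NULL, worker, &ids[i]);
        }
        for (int i = 0; i < HW_THREADS; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }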

Each of the three POWERx cores includes a 32 KB L1 instruction cache
and a 32 KB L1 data cache, and together the cores share 1 MB of Level
2 cache.

Finally, it has been said that the POWERx will be built using the
most advanced techniques, manufactured with Silicon-on-Insulator
(SOI), low-k dielectrics, and strained silicon in order to achieve
higher performance and lower power consumption.


The Vector Unit

The POWERx will include in each core a Single Instruction, Multiple
Data (SIMD) unit as an extension to the processor's instruction set.
This specialized unit is AltiVec technology, jointly developed by
Motorola, IBM, and Apple. Known in the Mac world as the Velocity
Engine, AltiVec is the PowerPC camp's answer to Intel's MMX and
SSE/SSE2/SSE3, as well as AMD's 3DNow! vector extensions.

The extension contains special instructions that help speed up
integer and floating-point-intensive applications when they're
specifically coded to take advantage of the new instruction set. The
AltiVec vector unit is designed to improve the performance of any
application that can exploit data parallelism, something that
particularly applies to real-time graphics.

The performance of this vector unit is as good as Intel's latest
offering, SSE3. However, AltiVec has one key advantage over its
competitors: it doesn't require programmers to write assembly code.
By using the AltiVec C Programming Model, developers can use their C
and C++ knowledge to code for this unit.
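
As a taste of that programming model, here is a minimal AltiVec C
sketch that scales an array of floats four at a time. It uses the
standard AltiVec intrinsics (vec_ld, vec_madd, vec_st); the function
and the scaling task itself are just our example, not anything from
an Xbox 2 toolchain:

    #include <altivec.h>  /* compile with e.g. gcc -maltivec on PowerPC */

    /* Scale n floats by a constant factor, four lanes at a time.
     * Assumes n is a multiple of 4 and both pointers are 16-byte
     * aligned, as AltiVec loads and stores require. */
    void scale_floats(float *dst, const float *src, float factor, int n)
    {
        /* Replicate the scalar factor across all four vector lanes. */
        vector float vfactor = (vector float){factor, factor, factor, factor};
        vector float vzero   = (vector float){0.0f, 0.0f, 0.0f, 0.0f};

        for (int i = 0; i < n; i += 4) {
            vector float v = vec_ld(0, &src[i]);  /* aligned 128-bit load  */
            v = vec_madd(v, vfactor, vzero);      /* fused multiply-add    */
            vec_st(v, 0, &dst[i]);                /* aligned 128-bit store */
        }
    }

Note that no assembly is involved: the intrinsics map onto AltiVec
instructions while the surrounding code stays plain C.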


Microprocessor Summary

The POWERx is a cutting-edge processor based on the execution core of
IBM's 64-bit POWER5+ architecture and will be a highly parallel
implementation of the PowerPC architecture, combining vector engines
with superscalar, superpipelined execution cores. With this design,
Microsoft is also joining the industry trend toward multithreaded,
multi-core designs.

This processor will be fabricated using IBM's state-of-the-art
chip-making technology, with silicon-on-insulator transistors, copper
interconnects, and strained silicon techniques, to build a part that
achieves higher performance while consuming less power.


The VPU

Yes, this time the right term to describe the graphics chip is Visual
Processing Unit, a term that ATI Technologies coined for its graphics
processors. The GPU designation doesn't apply anymore, since
Microsoft decided to drop nVIDIA in favor of the Markham,
Ontario-based company to develop custom graphics technologies for
"future Xbox products".


What we know so far is that the Xbox 2 will feature a custom silicon
derivative of ATI's next-generation VPU, code-named R500. This
graphics chip could be ATI's first VPU to support DirectX 9.0's
Shader Model 3.0, since its current line-up only offers support up to
the second version of both the pixel and vertex shader models. A
previously leaked document claims the Xbox successor will support
Shader Model 3.0 and beyond, which can be interpreted as some of the
features we'll see in the next version of DirectX, set to ship with
Windows Longhorn.

Microsoft revealed at its Meltdown conference that the next DirectX
will have new features such as dynamic geometry/topology modification
and will allow graphics chips to generate shadow volumes, extrude
shadow polygons, and run other graphics routines that must be
performed on the CPU in the current DirectX 9 API. Finally, Microsoft
promises a unified shader model, which will be called Shader Model
4.0. Whether these features will be included in the Xbox 2 is still
unknown.

The document also claims the Xbox 2 VPU will run at 500 MHz or a
clock speed slightly above that, which is not an impressive spec
considering today's GPU clock speeds. However, when it comes to
graphics, it is not just about speed. According to the leaked
document, the VPU has 48 arithmetic logic units (ALUs) that can
execute 64 simultaneous threads on groups of 64 vertices or pixels.
These ALUs are automatically assigned to either pixel or vertex
processing depending on the load, and each can perform one vector and
one scalar operation in a single clock cycle, yielding a shader core
that can execute 96 shader operations per clock cycle.

The Xbox 2 VPU is supposed to have a real peak pixel fill rate of 4
gigapixels per second, which doesn't sound too impressive either
considering today's graphics chip specs. We'll have to wait for an
official announcement before drawing further conclusions.
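
Those numbers can be cross-checked with some quick arithmetic. The
figures below are all from the leaked document; the derivation, and
the assumption that both claims refer to the same ~500 MHz clock, are
ours:

    #include <stdio.h>

    int main(void)
    {
        const double clock_hz    = 500e6; /* claimed VPU clock             */
        const int    alus        = 48;    /* claimed ALU count             */
        const int    ops_per_alu = 2;     /* 1 vector + 1 scalar per clock */

        /* 48 ALUs x 2 ops = the 96 shader ops per clock the leak cites. */
        double ops_per_clock = (double)(alus * ops_per_alu);
        printf("%.0f shader ops/clock = %.0f Gops/sec\n",
               ops_per_clock, ops_per_clock * clock_hz / 1e9);

        /* A 4 Gpixel/sec peak fill rate at 500 MHz implies 8 pixels
         * written per clock cycle. */
        printf("%.0f pixels per clock\n", 4e9 / clock_hz);
        return 0;
    }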


HD Gaming

Microsoft knows that HDTV is becoming a reality, as cable and
satellite TV providers grow their HD offerings and more people
purchase high-end televisions. By the time the next-generation
consoles ship, HDTV will no longer be a thing for early adopters and
will finally become mainstream consumer electronics.

In this scenario, it'll be important that videogames catch the wave
and deliver an experience that resembles HDTV, known for sharper and
clearer visuals at up to seven times the resolution of regular TV.
It'd be a terrible letdown to have a videogame system that uses a
lower resolution than a TV broadcast. Right now something similar
happens when HDTV owners watch a DVD movie and notice the downgrade
in resolution. Soon there will be HD DVDs, which will finally offer
film-like resolution at home. Therefore, next-generation videogame
consoles will need to keep up with other mainstream electronics and
deliver a high-definition experience.



Although the current Xbox is capable of delivering HDTV video signals
using the High Definition AV Pack, in practice only a few titles have
been able to run above 480p. This is not because of laziness on the
developers' part but due to hardware restrictions that make it
virtually impossible to run some engines above standard resolutions.
It would be impossible to run games like Unreal Championship, Halo,
Splinter Cell, The Chronicles of Riddick, or the upcoming Doom 3 and
Halo 2 at resolutions above 480p without sacrificing a lot of visual
effects and features to maintain a playable framerate.

Thankfully, the technology keeps evolving, and the latest generation
of graphics processors can run the most modern engines at high
resolutions. Video cards powered by nVIDIA's GeForce 6800 and ATI's
X800 chips can run the most graphically demanding PC games (including
Painkiller, Far Cry, Doom 3, and Half-Life 2) at 1600x1200 with a
playable framerate and all the effects turned on. So far, only one
engine promises to bring the current generation of graphics chips to
its knees: Unreal Engine 3.0. However, there won't be any games using
this engine until 2006, by which time hardware will once again have
caught up with the software.

So ATI knows that the VPU it is making for the Xbox successor will
have to allow game developers to run their games at least at 720p
(1280x720) and, if possible, at 1080i (1920x1080). Considering that
ATI's Radeon series of VPUs integrates a set of technologies designed
to make games playable at high resolutions, making the Xbox 2 VPU
high-definition capable should be easy.
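
To put those targets in perspective, here is a rough
back-of-the-envelope calculation of the pixel budgets involved, using
the leaked 4 Gpixel/sec fill rate. The 60 fps target and the idea of
an "overdraw budget" are our simplifying assumptions; shading cost
and bandwidth are ignored:

    #include <stdio.h>

    int main(void)
    {
        const long   p720  = 1280L * 720;   /*   921,600 pixels per frame */
        const long   p1080 = 1920L * 1080;  /* 2,073,600 pixels per frame */
        const double fill  = 4e9;           /* leaked peak fill rate      */
        const double fps   = 60.0;          /* assumed frame rate target  */

        /* How many times every pixel on screen could be repainted per
         * frame -- a crude upper bound on overdraw. */
        printf("720p  overdraw budget: %.0fx\n", fill / (p720  * fps));
        printf("1080i overdraw budget: %.0fx\n", fill / (p1080 * fps));
        return 0;
    }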

Embedded VRAM

One new trick to make HD gaming a reality will be the embedded video
RAM that will supposedly be used in the Xbox 2. Video memory
bandwidth is one of the most critical resources in today's graphics
paradigm. To increase memory bandwidth you can boost the memory clock
speed or use a wider path. The latest graphics cards, such as the
GeForce 6800 and ATI X800, use 256-bit interfaces to reach peak
memory bandwidths above 30 GB/sec. Other methods of stretching this
resource are bandwidth-saving techniques, such as data compression.

The leaked document revealed that Microsoft plans to incorporate 10
MB of dedicated memory for use by the VPU. This is what is known as
embedded RAM, because the memory is embedded directly onto the chip.
The advantage of embedded RAM is that it offers speed and bandwidth
far superior to conventional off-chip memory. Think of it as the
difference between system memory and a microprocessor's cache. Of
course, this memory will be limited in size, because it is a lot more
expensive than regular external memory. The basic idea is to offer
the VPU a fast memory where it can move data at extremely high speeds
with reduced latency. Whether this embedded RAM will be used as a
frame buffer, a texture buffer, or a combination of both is unknown.

Microsoft wants to eliminate current architectural bottlenecks, and
this embedded RAM solution might be the key to enabling the most
advanced visuals at high resolutions. It remains to be seen whether
the similarities with the Nintendo GameCube (an ATI graphics chip, an
IBM PowerPC) continue with the use of MoSys' embedded 1T-SRAM memory
in the Xbox 2 hardware.


Memory and Bandwidth

According to the leaked specs, the Xbox successor will have the same
UMA design its predecessor used; that is, a unified memory
architecture equally accessible to both the VPU and CPU. The paper
claims the bandwidth available to the processor and graphics chip for
accessing system memory is 22.4 GB/sec, meaning the memory clock
speed should be around 700 MHz. Again, we have our doubts about these
specs, as they don't sound too impressive for hardware shipping a
year and a half from now at the earliest.
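
Here is where that 700 MHz estimate comes from. The 22.4 GB/sec
figure is from the leak; the 128-bit DDR memory bus, like the one the
original Xbox used, is our assumption:

    #include <stdio.h>

    int main(void)
    {
        const double bandwidth = 22.4e9;    /* leaked bytes per second   */
        const double bus_bytes = 128 / 8.0; /* assumed 128-bit bus width */

        double transfers = bandwidth / bus_bytes; /* 1.4e9 transfers/sec */
        double ddr_clock = transfers / 2;         /* DDR: 2 per clock    */

        printf("%.1f GT/sec -> ~%.0f MHz DDR memory clock\n",
               transfers / 1e9, ddr_clock / 1e6);
        return 0;
    }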

The Xbox successor will supposedly have 256 MB of system memory, but
again this number will likely change in the future. There are already
games out there (such as Doom 3 for the PC) that call for 512 MB of
video memory to run in a mode called Ultra Quality, where nothing is
compressed.

Finally, the paper claims the embedded RAM has 32 GB/sec of
bandwidth, which means this memory could have a clock speed of around
1 GHz DDR, a logical number considering the VPU clock speed. This
bandwidth allows the embedded RAM to receive eight pixels every VPU
clock cycle, and these pixels can be expanded through multisampling
to four samples each, for up to 32 multisampled pixel samples per
clock cycle. With alpha blending, z-test, and z-write enabled, this
is equivalent to having 256 GB/sec of effective bandwidth.
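
The paper's embedded RAM figures hang together arithmetically as
well. The numbers are theirs; the step-by-step derivation, and the
assumption of 8 bytes per pixel (32-bit color plus 32-bit Z), are
ours:

    #include <stdio.h>

    int main(void)
    {
        const double edram_bw  = 32e9;  /* claimed bandwidth               */
        const double vpu_clock = 500e6; /* claimed VPU clock               */
        const double px_bytes  = 8.0;   /* assumed: 32-bit color + 32-bit Z */

        double bytes_per_clock  = edram_bw / vpu_clock;       /* 64 */
        double pixels_per_clock = bytes_per_clock / px_bytes; /*  8 */

        printf("%.0f bytes/clock -> %.0f pixels/clock\n",
               bytes_per_clock, pixels_per_clock);

        /* With 4x multisampling those 8 pixels expand to 32 samples,
         * and blending plus z-test/z-write multiply the traffic again;
         * 8x the raw figure gives the quoted 256 GB/sec "effective". */
        printf("claimed effective bandwidth: %.0f GB/sec\n",
               8 * edram_bw / 1e9);
        return 0;
    }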

Another thing the leaked specs touched on is the possibility of
offloading some of the work from the VPU to the AltiVec vector units
found in each core.


We can only imagine what kind of visuals these hardware specs will
make possible, but the tech demos shown by Epic Games to promote
their Unreal Engine 3.0 might be a good example of the graphics we'll
see once the Xbox 2 ships.


What's Next in Part II

Be sure to check back tomorrow as we continue our in-depth look at
the Xbox successor. We'll venture into the Xbox 2's console and
controller design, the hard drive (or lack thereof), backward
compatibility, and Microsoft's XNA initiative. Plus, we'll reveal
which development houses are already working on the Xbox 2.
 