Intel strikes back with a parallel x86 design


Nick Maclaren

On Fri, 30 Sep 2005 17:45:43 -0400, George Macdonald wrote:

OTOH, contrary to Nick's point, the 5150s did not ship with a 3270
emulator card. Those came later (IIRC IBM wasn't even first), as it
became obvious the PC was a better and cheaper 3270. Saying that the
PC was originally intended to be a 3270 replacement is "new history", at
best.

If you had been closely involved, you would know what was being
planned, and would not have jumped to erroneous conclusions.
I was involved in the SAA project, in several ways, and the IBM
PS/2 was indeed intended to replace 3270s. But not in the way
that you think. You are correct that the original IBM PC was not
intended to replace 3270s, but nobody said that it was.

IBM had realised that 3270s were too limiting, and that the future
was GUIs. So IBM set up the CUA project to design a common user
access model for display stations (3270s) and GUI workstations
(PS/2s). The intent was that applications would be designed to
support both, and that the former would gradually be phased out
in favour of the latter.

Do you understand now?

The above is, of course, twaddle - as almost everyone who used a good
68K-based system can witness. The immense design superiority of the
range over the x86 range allowed it to run a real operating system,
with all of the advantages that implied. The first x86 CPU that
could run more than a crippled operating system was the 80386, and
there were some significant "gotchas" with that. That is one of the
reasons that relatively few fully functional operating systems were
shipped for the x86 line until the 486 arrived. The 68K had them
from the 68000 onwards.
It was supposed to be. ;-) Remember, RISC was the savior, CISC was a
dead-end. Oops.

Well, yes, but the PowerPC project was about designing a SYSTEM and
not just a CPU. It was actually a damn good design, for its time,
and would have beaten all of the 80386-based systems (except perhaps
Sequent) into a cocked hat - had it been delivered :-(


Regards,
Nick Maclaren.
 

Anne & Lynn Wheeler

keith said:
While I agree with your sentiment, there wasn't any lack of a
graphics card. The CGA card and monitor were announced and shipped
with the 5150. I had a CGA card (couldn't afford the monitor) on my
"first day order" 5150. ...along with the monochrome card and
monitor.

OTOH, contrary to Nick's point, the 5150s did not ship with a 3270
emulator card. Those came later (IIRC IBM wasn't even first), as it
became obvious the PC was a better and cheaper 3270. Saying that
the PC was originally intended to be a 3270 replacement is "new
history", at best.

I had first day employee order ... the day before it arrived, the
prices were lowered ... and I could pick up the same configuration at
computerland for less than i had paid in the employee order.

the PC wasn't originally intended to be a 3270 replacement ... but it
(eventually) got a big boost in market penetration when it started being
used for terminal emulation. in fact, before mac was announced ... I
even had arguments with some of the mac developers about its chances
of success w/o some commercial support ... like terminal emulation (my
brother was regional apple marketing rep ... he claimed he had the
largest physical territory in continental US ... anyway when he came
into town, we had dinners with various people).

in any case, terminal emulation, in turn, resulted in a large install
base of equipment around the terminal emulation business. then with a
large terminal emulation install base ... it had to be protected. the
growing capability and emerging client/server paradigm was a threat to
this install base. SAA's supposed premise was that applications could
be run everywhere ... but there was a lot of money being spent on porting
PC applications to the mainframe ... hoping to stuff the client/server
genie back into the bottle.
http://www.garlic.com/~lynn/subnetwork.html#unbundle

in this period ... we had come up with 3-tier architecture, middle
layer, etc and were out pitching it in customer executive presentations
.... which didn't endear us to any in the SAA crowd (and since it was
also ethernet oriented ... didn't make any friends with the t/r
people). In a previous life, I had worked frequently with the person
responsible for SAA ... so we would periodically stop by his big
corner office in somers (some joke he could almost see endicott from
it) and give him a bad time about the reality of SAA prospects.
http://www.garlic.com/~lynn/subtopic.html#3tier

one of the threats to the terminal emulation business was the
increasing number of business applications showing up on PCs ... this
in turn was driving big business for PC hard disk capacities ... and
you started seeing a noticeable decline in the growth of mainframe
disk sales as business data leaked into the distributed
environment. the disk division came up with several products that
would provide extremely high-thruput disk access to glass house data
by PCs and other distributed processes ... with lots of business case
justification like data backup, integrity, disaster/recovery, etc
(they even had statistics on businesses that declared bankruptcy when
unbacked up data on PC disks was lost).

In any case, this resulted in significant wars between the disk
division and the division responsible for terminal emulation business
.... with the division responsible for terminal emulation business
claiming total strategic responsibility for everything outside the
walls of the mainframe machine room. One of the disk division's
retaliations was giving a presentation at a large internal conference
going into gory detail about how the communication division was going
to be directly responsible for the demise of the mainframe disk
division (the limited terminal emulation spigot was greatly
accelerating the migration of data residency out into the distributed
environment).

as an aside ... there was an acorn software project out on the west
coast; boca had said it wasn't doing or involved in software ... and
it was perfectly reasonable for the west coast group to do software.
possibly monthly, boca was contacted to reaffirm that they weren't
interested in software and west coast was free to take on software
mission. then at some point, boca changed its mind and effectively
said that the west coast group couldn't do the software mission ...
and if the people involved wanted to be involved in the software
mission for acorn ... they had to move to boca.

some past posts mentioning acorn
http://www.garlic.com/~lynn/2002g.html#79 Coulda, Woulda, Shoudda moments?
http://www.garlic.com/~lynn/2003c.html#31 diffence between itanium and alpha
http://www.garlic.com/~lynn/2003d.html#9 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003d.html#19 PC history, was PDP10 and RISC
http://www.garlic.com/~lynn/2003e.html#16 unix
http://www.garlic.com/~lynn/2005q.html#24 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
 

Nick Maclaren

On Thu, 29 Sep 2005 08:32:06 +0000, Nick Maclaren wrote:


The project was never "announced". It died a silent, though excruciating
death.

All right, when it was officially admitted in public :)
Are you talking about OpenPC, or some such? "PowerPC" is a processor
architecture. I don't believe it was used for a system, though I could
be wrong.

It was. I still don't think that I have any of the design documents,
but the term was certainly used for the system design in the early
days, perhaps unofficially. Now you mention it, the name OpenPC
rings a bell as the official name, but you know the reaction of
technical staff to official names :)

And I really can't remember what the original PowerPC PC was called
when it was released - though it probably had some other name entirely.


Regards,
Nick Maclaren.
 

Anne & Lynn Wheeler

oops, unbundle was brain-check ... attention momentarily wandered to a
posting being done in parallel concerning free software from the 60s and
the transition to priced software in the 70s ... aka
http://www.garlic.com/~lynn/2005r.html#7 DDJ Article on "Secure" Dongle
i.e.
http://www.garlic.com/~lynn/subtopic.html#unbundler

it should have been a reference to the collection of postings on
terminal emulation
http://www.garlic.com/~lynn/subnetwork.html#emulation

note that 20 years ago ... the disk division was starting to come up
with these mainframe nas/san-like solutions for the distributed
environment ... and the communication division was constantly killing
the efforts ... protecting their position as owning everything that
crossed the wall of the mainframe machine room.

it also gave my wife fits earlier when she was conned into going to
pok to be in charge of loosely-coupled architecture. stuff that she
could potentially do between mainframes in the same machine room
couldn't be done when it was between the same mainframes but in
different machine rooms (because it then became the responsibility of
the communication division). it is possibly also why the mainframe
fiber-optic interconnect languished in pok for a decade ... the
mainframe machine rooms had to get big enuf and complex enuf to
justify fiber intra-room interconnect ... and a very few battles had
to be won with the communication division to allow some to cross the
boundary of the machine room wall (to be used for inter-room
interconnect).

when i was doing the high-speed interconnect project ... i purposely
called it HSDT (high-speed data transport)
http://www.garlic.com/~lynn/subnetwork.html#hsdt

to try and help differentiate it from communication (and the
responsibility of the communication division). something that sort of
highlighted the difference ... I was about to leave on a trip to the
far east to contract for some equipment. the friday before i left,
somebody from the communication division announced a new discussion
group on "high-speed" ... and offered the follow definitions for use
in the discussions

low-speed <9.6kbits
medium-speed 19.2kbits
high-speed 56kbits
very high-speed 1.5mbits

monday morning in a conference room outside of Tokyo were these
definitions on the wall:

low-speed >20mbits
medium-speed 100mbits
high-speed 200-300mbits
very high-speed >600mbits
 

Del Cecchi

keith said:
There is (or at least was) a legal difference. Programmers are
classified differently than engineers because of the differences in
overtime rules forced by a few lawsuits in the '70s.


Because of the above overtime rules. Programmers doing "coding" are
non-exempt (kinda/sorta, under certain circumstances). Engineers are
exempt. This wasn't really a technical distinction.


Many. PL/AS was never shipped to customers, AFAIK. Many of the docs
(e.g. language reference) were Registered Confidential.


Rochester always did things "differently". ;-)

There were also differences related to European legal settlements that
required unbundling and stuff like that. The code written by engineers
was considered hardware so it stayed with the hardware and didn't have
to be released like software did.

del
 

Anne & Lynn Wheeler

for some additional drift from terminal emulation to nas/san ... in
the mid-80s both LANL and NCAR (boulder) were doing things where a
standard ibm mainframe was handling tape library and disk farm for a
variety of supercomputers on the same machine room floor. they used
HYPERchannel for both processor (messaging) interconnect and
processor/device (I/O) interconnect.

ncar would possibly have a cray make a request to ibm mainframe for
some data. the mainframe would possibly stage it from tape to disk (if
necessary) and then initialize the appropriate HYPERChannel remote
device adapter (it emulated an ibm channel) that the particular disk
controller was attached to. a coding sequence was then returned by the
ibm mainframe to the cray (HYPERChannel message). The Cray would then
signal the indicated HYPERChannel remote device adapter with the
correct coding sequence ... to do the actual (direct) data transfer.
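
a minimal python sketch of the third-party transfer flow described
above -- the mainframe handles the control path (staging, programming
the remote device adapter, returning the coding sequence) while the
bulk data moves directly between disk controller and cray. all names,
message formats and extents here are invented for illustration:

import secrets

class RemoteDeviceAdapter:
    """stands in for a HYPERchannel remote device adapter that
    emulates an ibm channel in front of a disk controller."""
    def __init__(self):
        self.pending = {}                 # coding sequence -> staged extents

    def program(self, dataset, extents):
        token = secrets.token_hex(8)      # the "coding sequence"
        self.pending[token] = (dataset, extents)
        return token

    def transfer(self, token):
        # the cray presents the coding sequence and the adapter drives
        # the disk controller directly -- no mainframe in the data path
        dataset, extents = self.pending.pop(token)
        return f"{len(extents)} extents of {dataset} moved directly"

class Mainframe:
    """owns the tape library / disk farm metadata (control path only)."""
    def __init__(self, adapter):
        self.adapter = adapter
        self.on_disk = set()

    def request(self, dataset):
        if dataset not in self.on_disk:   # stage tape -> disk if needed
            self.on_disk.add(dataset)
        extents = [(0, 4096), (4096, 4096)]   # made-up disk extents
        return self.adapter.program(dataset, extents)

# cray side: ask the mainframe (HYPERchannel message), get the coding
# sequence back, then signal the adapter for the direct transfer
adapter = RemoteDeviceAdapter()
mainframe = Mainframe(adapter)
token = mainframe.request("CLIMATE.MODEL.OUTPUT")
print(adapter.transfer(token))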

LANL was in large part behind the IEEE standardization work for cray
channel as HiPPI. There was some effort in both IPI3 (disk) standards
activity and HiPPI switch standardization ... to make sure that
*third-party* transfers (of the kind NCAR was doing with the
HYPERchannel remote device adapter) were supported.

I was asked for advice on some of this because, in the early 80s, I had
done some mainframe device driver work for HYPERChannel.

A specific situation in the early 80s was that the Santa Teresa Lab was
overflowing and there was a decision to move 300 IMS (database)
programmers off-site ... but with remote access to the machine room.
Remote 3270s were evaluated (max at 9600 baud operation) and quickly
discarded. So the solution was to create a high-speed connection between
STL(bldg.90) and the new offsite location ten miles away
(bldg. 96/97). HYPERChannel was to be used for channel extension with
300 "local" 3270s (and other devices) at the remote site. It was
possible to scavenge some bandwidth from the "campus" T3 collins
digital radio ... aka bldg. 12 on the main plant site had digital
radio to the repeater tower on the hill above STL ... and also
line-of-sight to the roof of the new off-site bldg. It just required
hooking up the appropriate circuits ... and making sure they had
point-to-point link encrypters.

Folklore is that when the highway 85 elevated section was first
opened... cars had their radar detectors trip when crossing the path
between bldg.12 and the STL repeater tower ... approx. here:
http://www.mapquest.com/maps/map.ad...an Jose&state=CA&zipcode=95119&searchtab=home

link encrypters were used on everything ... at one point in the
mid-80s, there was a claim that internal operations had well over half
of all link encrypters in the world (and all the orders established at
least one company in the crypto business).
 

keith

If you had been closely involved, you would know what was being
planned, and would not have jumped to erroneous conclusions.

Bullshit. Read Lynn's response to mine.
I was involved in the SAA project, in several ways, and the IBM
PS/2 was indeed intended to replace 3270s. But not in the way
that you think. You are correct that the original IBM PC was not
intended to replace 3270s, but nobody said that it was.

IBM had realised that 3270s were too limiting, and that the future was
GUIs. So IBM set up the CUA project to design a common user access
model for display stations (3270s) and GUI workstations (PS/2s). The
intent was that applications would be designed to support both, and that
the former would gradually be phased out in favour of the latter.

Again, we were discussing the 5150, not what came more than a half-decade
later. Certainly by the time the PS/2 came about the PC as a 3270
replacement was well underway. We were ripping 3270s out and replacing
them by the hundreds, long before the PS/2 was announced.
Do you understand now?

I hope you do understand that the topic being discussed was the
*original* PC, not the PS/2. Sheesh!
The above is, of course, twaddle -

Why don't you tell the person who wrote it?
 

Oliver S.

And I expect future CPUs to go a step back and completely drop
That will sure do wonders for integer performance, right?

Yes, because it seems easier to get high performance with simple
cores than with full-blown brainiac out-of-order cores. Niagara will
demonstrate that.
 

keith

Yes, because it seems easier to get high performance with simple
cores than with full-blown brainiac out-of-order cores.

It seems there are at least a few people who think so. We'll see, but I'm
not convinced. The P4 certainly is one data point in the other direction.
Niagara will demonstrate that.

You sound so sure of yourself!
 

George Macdonald

What didn't you like about S/3? I suppose you didn't like its
descendants S/32, S/34, S/36? The instruction set was a stripped-down
storage-to-storage design resembling the S/360. Or do you have something
against RPG? The cute little cards?

I wasn't that involved with it but my occasional brushes with it shocked me
- yeah the funny cards, the floppy disk, the odd little display embedded in
the desk... I was always amused by the "stick" display on the operator
panel of the one I saw, which seemed to have incandescent bulbs behind the
sticks.

I was told it had no general-purpose registers, just 3 "index registers", and
was glad I never had to program the thing, though our company did a linear
programming system for it... in assembler of course, a project which I
thankfully managed to steer clear of. It always struck me as more of an
electronic tabulating machine than a real computer but maybe that's just a
bias based on my previous experiences at the time.
 

Nick Maclaren

Bullshit. Read Lynn's response to mine.

What the HELL does that have to do with your misquoting of what I
said? I neither said that the original PC shipped with a 3270
emulator card, nor that it was intended as a 3270 replacement. You
jumped to erroneous conclusions and therefore assigned fatuous
statements to me that I did not make.

I had mistaken your identity, and you therefore should know the
events, and have therefore recognised the projects and timescales
I was referring to.

Are you SERIOUSLY claiming that the PS/2 was not, inter alia, intended
as a 3270 replacement?
Umm, PS/2 <> 5150. PS/2 was *not* the original PC, which is what we are
talking about.

Not at all. The thread to which I was responding was talking about
the later, PowerPC era. It started when I said that there were two
other systems that could have stopped the rise of the x86, and both
had failed because of the incompetence of others (i.e. not Intel),
the 68K range and the PowerPC. As you should know, the periods when
the x86 range was vulnerable to those two did not overlap.


Regards,
Nick Maclaren.
 

Bernd Paysan

Anne & Lynn Wheeler wrote:

the friday before i left,
somebody from the communication division announced a new discussion
group on "high-speed" ... and offered the follow definitions for use
in the discussions

low-speed <9.6kbits
medium-speed 19.2kbits
high-speed 56kbits
very high-speed 1.5mbits

monday morning in a conference room outside of Tokyo were these
definitions on the wall:

low-speed >20mbits
medium-speed 100mbits
high-speed 200-300mbits
very high-speed >600mbits

;-) What about this definition:

low-speed: mean-1sigma
medium-speed: mean
high-speed: mean+1sigma
very high-speed: mean+2sigma
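
A quick sketch of that relative definition in Python, classifying a
link against the mean and standard deviation of whatever speeds are
observed around it (the sample lists below just reuse the two sets of
figures quoted upthread):

from statistics import mean, stdev

def classify(speed_mbits, observed_mbits):
    m, s = mean(observed_mbits), stdev(observed_mbits)
    if speed_mbits < m - s:
        return "low-speed"
    if speed_mbits < m:
        return "medium-speed"
    if speed_mbits < m + s:
        return "high-speed"
    return "very high-speed"

# the same 1.5 mbit link, judged against each side's world view:
print(classify(1.5, [0.0096, 0.0192, 0.056, 1.5]))   # very high-speed
print(classify(1.5, [20, 100, 250, 600]))            # medium-speed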
 

keith

What the HELL does that have to do with your misquoting of what I
said? I neither said that the original PC shipped with a 3270
emulator card, nor that it was intended as a 3270 replacement. You
jumped to erroneous conclusions and therefore assigned fatuous
statements to me that I did not make.

I didn't misquote you at all, Nick. A while back in this thread, you
said (on 09/29/2005 07:50:02 AM, according to my server):

"The fact of the matter (whether you like it or not) is that Intel
established itself as the chip maker for the IBM PC, which was never
intended by IBM to be used as much more than a programmable terminal."

The IBM PC was the 5150. Nowhere did you say PS/2, and "IBM PC, which
was never intended" implied the *original* intent behind the 5150. Maybe
you ought to be a little more careful what you write.
I had mistaken your identity, and you therefore should know the
events, and have therefore recognised the projects and timescales
I was referring to.

What's my identity got to do with it? What you wrote was BS, and I
corrected what you *wrote* (who knows anymore what you meant). I'm not a
mind-reader, Nick.
Are you SERIOUSLY claiming that the PS/2 was not, inter alia, intended
as a 3270 replacement?

The PS/2 was not under discussion (see above).
Not at all. The thread to which I was responding was talking about the
later, PowerPC era. It started when I said that there were two other
systems that could have stopped the rise of the x86, and both had failed
because of the incompetence of others (i.e. not Intel), the 68K range
and the PowerPC. As you should know, the periods when the x86 range was
vulnerable to those two did not overlap.

Then say what you mean! PowerPC <> PS/2 <> IBM PC. Sheesh!
 

Del Cecchi

George Macdonald said:
I wasn't that involved with it but my occasional brushes with it
shocked me - yeah the funny cards, the floppy disk, the odd little
display embedded in the desk... I was always amused by the "stick"
display on the operator panel of the one I saw, which seemed to have
incandescent bulbs behind the sticks.

I was told it had no general-purpose registers, just 3 "index
registers", and was glad I never had to program the thing, though our
company did a linear programming system for it... in assembler of
course, a project which I thankfully managed to steer clear of. It
always struck me as more of an electronic tabulating machine than a
real computer but maybe that's just a bias based on my previous
experiences at the time.

I think the display in the desk was a system/32. System 3 had no display
at first. Later it used twinax-attached displays.

And you are dissing it based on second hand rumors from 30 years ago?
Shame on you.

del
 

Stephen Fuld

Nick Maclaren said:
Think back to the 1970s. As the Wheelers can witness, most of those
were not available under MVS (or, at least, were hopeless),

I am not going to get into an argument about the quality of the tools; I
simply state that they were available. I should note that some of the
functions I listed above were available from third parties. IIRC, all of
the tools in the list were available on MVS (and in fact, I think at
least most of them were available on OS/360 (talk about Dire!)).
and CMS
was extensively used for developing for MVS. Yet IBM classified CMS
as a system for scientific/technical use rather than for business/
commercial (though it wasn't that simple).

[ IBM's standard TSO editor was like an interactive version of IEBUPDTE,
to give just one horrible example. ]

But the TSO editor was there, and clearly intended for program development.
And, again, in the mid-1980s. IBM's plans for SAA were that the
development of MVS applications would be done on OS/2, and that many
of those tools would not be available at all on MVS. Seriously.

SAA was pretty much a total flop, as it was driven purely to meet IBM's
marketing needs and not any customer need.

snip
An aspect that has never concerned me directly, but I have observed
several times with several vendors, is that the fancy development
tools (especially debuggers) often work with Fortran and C, but not
Cobol, even when they run on the same system. This often has the
effect that the Cobol system ships with its own set of debugging
tools, and debugging Cobol+Fortran codes becomes a nightmare.

Well, COBOL was, and is, pretty much a secondary product from most vendors
other than IBM and some of the other old mainframe vendors. I am not
surprised if the example you gave is true. :-( The tendency to develop
commercial applications with C is one I decried regularly.
One of the more common problems I hit is that parallel support (e.g.
MPI), batch schedulers etc. are classed as "scientific/technical"
and "high-RAS" products (whether management environments, automated
log management or high-RAS file systems) as "business/commercial".
Yes.

Are they validated/tested together? Don't be silly. Do they work
together? Only in demonstrations. This IS improving, as more of the
commercial customers start to use the parallel tools and schedulers,
but is still a problem.

Agreed, but the lack of coordination problem is orthogonal to who develops
what or who uses what.
 

Anne & Lynn Wheeler

George Macdonald said:
The bottom line, which I was trying to get across, is that in a
software system which exercised the CPU across all its spectrum of
operations, the 68K was a dog. It was just as much a dog at pure
"commercial" work as it was at quasi-scientific "business"
application work - the CPU was just slow in general and saddled with
an idealistic orthogonal ISA. The 386 was simply a better performer
and the PowerPC was never going to be enough better.

byte article from 11/96 ... a little later in time
http://www.byte.com/art/9611/sec6/art13.htm

PowerPC Regroups

Stung by Intel's gains in processor performance, the PowerPC alliance
will strike back with higher clock speeds and new chip designs.

Don't count your megahertz before they're hatched. That's what the
PowerPC alliance has learned after prematurely gloating over the
imagined obsolescence of Intel's x86.

One famous advertisement from 1992 showed how CISC performance was
falling flat while RISC technology soared toward the SPECint
stratosphere. Another ad warned about the coming fate of x86-based PCs
by picturing a highway running smack into a brick wall.

.... snip ...

and a little earlier also from byte
http://www.byte.com/art/9411/sec8/art5.htm

PowerPC 620 Soars

Its faster logic, shorter pipelines, and high-speed interface endow it
with processing power that raises it to workstation and server caliber

.... snip ...

and a little discussion of power and power/pc
http://www.research.ibm.com/journal/rd/385/preface.html

During the four years since the RISC System/6000* (RS/6000)
announcement in February of 1990, IBM* has strengthened its product
line with microprocessor enhancements, increased memory capacity,
improved graphics, greatly expanded I/O adapters, and new AIX* and
compiler releases. In 1991, IBM began planning for future RS/6000
systems that would span the range from small, battery-operated
products to very large supercomputers and mainframes. As the first
step toward achieving this "palmtop to teraFLOPS" goal with a single
architecture, IBM investigated further optimizations for the original
POWER Architecture*. This effort led to the creation of the PowerPC*
alliance (IBM Corporation, Motorola*, Inc., and Apple* Computer
Corporation) and the definition of the PowerPC Architecture*.

... snip ..

power was rios ... a traditional 801/risc architecture ... no
cache consistency, no provision for SMP, some number of hardware
trade-off issues from the original idea from the 70s. however, the
demise of various proprietary efforts in the early 80s ... and the
romp displaywriter followon ... had it stray from proprietary into the
world of unix and (more) open systems. i was assured at an advanced
technology conference in the mid-70s that the use of 16 segment
registers for virtual memory support ... was a hardware simplicity
trade-off ... aka a lot of 801 was swinging the pendulum to the
opposite extreme in reaction to the extremely complex FS (which was
eventually killed w/o being announced):
http://www.garlic.com/~lynn/subtopic.html#futuresys

.... in any case, the claim at the time ... was that the combination of
no protection domain and the ability of inline application code to change
(virtual memory) segment register values (as easily as general
register address pointers could be changed) would more than compensate
for the limited number of segments (for various memory mapped paradigm
implementations).
http://www.garlic.com/~lynn/subtopic.html#801

in some sense ... somerset and powerpc were going after both a larger
volume market ... as well as migrating to a somewhat more traditional
processor architecture with support for cache consistency,
multiprocessor operation, etc.

at the time, the executive we directly reported to in the hardware
group when we were doing ha/cmp ... misc ha/cmp refs (note none of the
executives mentioned in the following post is anybody we directly
reported to):
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/subtopic.html#hacmp

had previously worked for motorola. with the formation of somerset, he
moved over to head up somerset and the powerpc effort.
 

Nick Maclaren

I didn't misquote you at all, Nick. A while back in this thread, you
said (on 09/29/2005 07:50:02 AM, according to my server):

"The fact of the matter (whether you like it or not) is that Intel
established itself as the chip maker for the IBM PC, which was never
intended by IBM to be used as much more than a programmable terminal."

Ah. I plead guilty to being unclear, but please read that again.
That does NOT say that IBM intended to replace 3270s by that, for
a start. Secondly, the context was of the use of such things in
IBM's business/commercial marketplace, and the intent was that it
would be used primarily as a data entry and display device (perhaps
offline, perhaps not). It never crossed the mind of IBM's business/
commercial division that it would be used as anything more IN THEIR
MARKET, and we had that problem right up until the end of the life
of the PS/2.

I apologise for flying off the handle (first) in response to a plain
misunderstanding.


Regards,
Nick Maclaren.
 

Anne & Lynn Wheeler

Stephen Fuld said:
But the TSO editor was there, and clearly intended for program development.

there are lots of jokes among customers about TSO being too slow to
use interactively (even some recent threads in the ibm-mainframe group).
basically TSO was slightly better than using a keypunch.

when we were arguing with the product group about being able to have
subsecond response with the new 3274 display control units ...
effectively TSO came down on the side of the 3274 group ... since they
never had subsecond response ... even with the faster 3272 controller.
Basically the 3274 group defined their target market (what is cause and
what is effect?) as data entry (previously done on keypunches) which
doesn't have any issues with system performance and response.

reference to old report with numbers comparing 3277 & 3274:
http://www.garlic.com/~lynn/2001m.html#19 3270 protocol

from above:

            hardware   TSO 1sec.   CMS .25sec.   CMS .11sec.
3272/3277   .086       1.086       .336          .196
3274/3278   .530       1.530       .78           .64

....


as mentioned in the above reference, the numbers are very idealistic,
optimistic numbers for TSO (talk to customers that complained about
several seconds being more typical) ... and normal range of measured
numbers for production CMS environments. The joke in the above was
that .25sec was nominal for most CMS operations ... but the .11sec was
typical of several places running my latest performance tweaks (at the
particular point in time that the 3274/3278 issue was being debated).
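
the totals in the table are just the controller hardware latency added
to the system response time ... a quick python check reproduces the
1.086/.336/.196 and 1.530/.78/.64 totals (figures copied from the
table above):

hardware = {"3272/3277": 0.086, "3274/3278": 0.530}
system   = {"TSO": 1.00, "CMS nominal": 0.25, "CMS tuned": 0.11}

for ctl, hw in hardware.items():
    for name, sw in system.items():
        # perceived response = controller hardware time + system time
        print(f"{ctl} + {name}: {hw + sw:.3f} sec")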

also the hardware numbers are for direct channel attached controllers;
SNA managed controllers had significantly worse hardware response
(even local SNA controllers ... but remote SNA controllers were the
pits)

there was a passing reference to the subject in a previous post
http://www.garlic.com/~lynn/2005r.html#10 Intel strikes back with a parallel x86 design

where the IMS development group being remoted off-site couldn't face
the prospect of using remote 3270s ... even in conjunction with their
normal development environment (aka most of the machines at STL ...
which housed IMS, DB2, PLI, APL, Pascal, and a number of other
development groups ... were vm370/cms).

Part of the issue in the 3272/3274 was that the 3274 had moved some
amount of the electronics & logic out of the 327x terminal head ...
back into shared controller logic (reducing 327x terminal
manufacturing costs but contributing to performance issues).

For some of us ... even local channel, "fast" 3272/3277 had some
issues ... while the controller operated at 640kbytes/sec ... there
were still some half-duplex latency issues. A little soldering in the
keyboard and additional electronic modification in the display head
.... took care of some of the issues.

In spring of '85, a mainframe channel interface card was produced for
the PC/AT (16bit isa bus); it was configured in a pc/at along with PCNET
lan cards and some slight emulation of pseudo 327x terminal operation. an
emulator was then written for PCs on the PCNET lan ... that was
loosely 3270-like ... as well as some enhanced controller support
software on the vm370 mainframe. since it was an internal operation ...
various liberties were taken with all of these to eliminate as many
annoying characteristics as possible.

This was quickly replaced with enet lan cards. About a decade later,
the technology was finally starting to best the turbo-charged 3277
environment.

In the 70s, an internal 3270 telnet-like server/demon terminal
emulation package had been developed for vm370. it included support of
a program interface to applications running in a local cms virtual
machine. a programmatic scripting language quickly evolved for this
interface .... with a lot of features that would later appear in HLLAPI
support (pc applications doing 327x screen scraping). the most
prevalent internal one was the parasite/story package done in the UK
... past
parasite/story posting
http://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question

note that the REXX author had used some of the same basic technology
for implementing a multi-user spacewar game (supporting time-sharing
users on local machines and/or remote users over network links).

Then my home 300baud/ti700 was upgraded to 1200baud/3101. 3101 was
basically glass teletype ... but had something called "block mode".
You could either dial into the system as glass teletype ... or you
could connect directly to the 3270 server/demon and have it drive the
terminal in block-mode. The internal 3270 server/demon had been
upgraded to directly drive 3101 block mode and use its features to
optimize the terminal operation.

When I got my employee purchase ibm/pc ... minor previous reference
http://www.garlic.com/~lynn/2005r.html#8 Intel strikes back with a parallel x86 design

i was able to replace the 1200baud/3101 setup with 1200baud/ibmpc
terminal emulation. a greatly enhanced package was developed for the
ibm/pc that also interacted with a whole bunch of new features in the
3270 server/demon (available when it was directly driving the
line). fundamental was sophisticated transmission compression as well
as a dictionary of common stuff and a cache of already transmitted
stuff (the host server/demon kept state about what was in the pc cache
... and instead of doing a compressed transmission ... it had control
features to display stuff from the cache).
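
a small python sketch of the host-side caching idea described above
... the server/demon keeps its own model of what the pc already holds,
and sends a short display-from-cache control instead of retransmitting
(message formats and names here are invented for illustration):

import hashlib, zlib

class HostServerDemon:
    def __init__(self):
        self.pc_cache = set()          # host's model of the pc's cache

    def send(self, screen_bytes):
        key = hashlib.sha1(screen_bytes).hexdigest()[:8]
        if key in self.pc_cache:
            # control sequence only -- pc repaints from its own cache
            return ("DISPLAY-FROM-CACHE", key)
        self.pc_cache.add(key)         # remember what the pc now holds
        return ("COMPRESSED", key, zlib.compress(screen_bytes))

demon = HostServerDemon()
screen = b"READY\n===>"
print(demon.send(screen)[0])   # first time: COMPRESSED transmission
print(demon.send(screen)[0])   # repeat: DISPLAY-FROM-CACHE control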

it was this basic technology infrastructure (3270 server/demon) that
then had enhancements to drive the channel-attached PC/AT LAN gateway
for local terminal emulation.

... some topic-drift ... the home terminal program initially developed
a dial-in interface that supported callback (aka you dialed,
identified yourself ... and the interface then hung up and called back
the number listed for the identification). this was enhanced with an
encrypting 2400baud hayes-compatible async card ... that did a sort of
SSL session hand-shake (not using public key tho), established a session
key and then ran an encrypted session. this was then required for all
home terminals and people using portable terminals/laptops dialing in
from hotels (a detailed vulnerability study had turned up hotel PBXs
as being one of the most likely compromised points in the
infrastructure).
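
a toy sketch of that kind of handshake ... both ends hold a shared
secret, exchange nonces in the clear, derive a session key, and then
run the link encrypted. purely illustrative (invented names, not the
actual card's protocol, and not real crypto):

import hmac, hashlib, os

def session_key(psk, nonce_a, nonce_b):
    return hmac.new(psk, nonce_a + nonce_b, hashlib.sha256).digest()

def xor_stream(key, data):
    # keystream from hashing key+counter; same call encrypts/decrypts
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(4, "big")).digest())
        counter += 1
    return bytes(d ^ k for d, k in zip(data, out))

psk = b"secret loaded into both cards"
na, nb = os.urandom(8), os.urandom(8)     # exchanged in the clear
k_home = session_key(psk, na, nb)         # home terminal side
k_host = session_key(psk, na, nb)         # host side derives same key
ct = xor_stream(k_home, b"logon lynn")
assert xor_stream(k_host, ct) == b"logon lynn"
print("session established, traffic encrypted")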

folklore has it that one of the early prototype crypto async cards
was provided to a senior executive. he had been an old time EE ... and
during testing touched his tongue to the contacts in the phone jack
.... just as the phone rang. after that it was mandated that all async
cards built by the corporation had to have recessed jack contacts (so
innocent individuals, like corporate senior executives, couldn't touch
them with their tongue).

various past posts discussing issues using HYPERChannel for
mainframe channel extension (used for moving the 300 people from the
ims group to the off-site location while allowing them to retain their
"local" 3270s).
http://www.garlic.com/~lynn/94.html#23 CP spooling & programming technology
http://www.garlic.com/~lynn/2000c.html#65 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000d.html#12 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2000f.html#30 OT?
http://www.garlic.com/~lynn/2001.html#22 Disk caching and file systems. Disk history...people forget
http://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001g.html#34 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001k.html#46 3270 protocol
http://www.garlic.com/~lynn/2001n.html#3 News IBM loses supercomputer crown
http://www.garlic.com/~lynn/2002.html#10 index searching
http://www.garlic.com/~lynn/2002j.html#67 Total Computing Power
http://www.garlic.com/~lynn/2002j.html#74 Itanium2 power limited?
http://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
http://www.garlic.com/~lynn/2003h.html#15 Mainframe Tape Drive Usage Metrics
http://www.garlic.com/~lynn/2003k.html#22 What is timesharing, anyway?
http://www.garlic.com/~lynn/2005e.html#13 Device and channel
http://www.garlic.com/~lynn/2005e.html#21 He Who Thought He Knew Something About DASD
http://www.garlic.com/~lynn/2005r.html#10 Intel strikes back with a parallel x86 design
 

Nick Maclaren

there are lots of jokes among customers about TSO being too slow to
use interactively (even some recent threads in the ibm-mainframe group).
basically TSO was slightly better than using a keypunch.

It was slightly better than using a CARD punch, but not as good
as using a (non-IBM) paper tape one! Admittedly, you then had
to get the data into MVT :)


Regards,
Nick Maclaren.
 

Sander Vesik

In comp.arch Oliver S. said:
Yes, because it seems easier to get high performance with simple
cores than with full-blown brainiac out-of-order cores. Niagara will
demonstrate that.

I find it very unlikely that Niagara will demonstrate that. It might
demonstrate just how small a fraction of server workloads need fast
single-thread performance when multiple threads are available, but
being fast with a simple high-instruction-rate core has never been
(AFAICT anyway) even close to what Niagara is about.
 
