Intel strikes back with a parallel x86 design

keith

there are lots of jokes among customers about TSO being too slow to
use for interactive (even some recent threads in ibm-mainframe group).
basically TSO was slightly better than using a keypunch.

Dunno, I used TSO for more than a decade for interactive applications.
The editor of choice was SPF, then ISPF. We used TILES for logic entry
and I even built a part number database for the management types using the
ISPF panel and table calls. TSO/MVS wasn't *that* bad.
when we were arguing with product group about being able to have subsecond
response with the new 3274 display control units ... effectively TSO
came down on the side of the 3274 group ... since they never had
subsecond response ... even with the faster 3272 controller. Basically
the 3274 group defined their target market (what is cause and what is
effect?) as data entry (previously done on keypunches), which doesn't have
any issues with system performance and response.

We had second(ish) *trivial* response when we were co-located with the
hardware. With a block editor this wasn't too bad (sure beat 2741s on
acoustic couplers). In Bldg. 701 such performance was rather tough to do
since the systems were all in SRL (about five miles away) and there were
three heads per 9600bps modem. :-(
 
Anne & Lynn Wheeler

keith said:
We had second(ish) *trivial* response when we were co-located with the
hardware. With a block editor this wasn't too bad (sure beat 2741s on
acoustic couplers). In Bldg. 701 such performance was rather tough to do
since the systems were all in SRL (about five miles away) and there were
three heads per 9600bps modem. :-(

that is one reason why when 300 people from the ims group were moved
to an off-site location ... they refused to go with remote 3270s; they
had a hard time tolerating any of the "local" SNA &/or MVS flavors ...
just give them local channel attach 3270s to cms (1980s time-frame).

there were some 2nd-order effects: moving the local 3274s for the 300
people offsite ... using HYPERChannel for channel extension over a
microwave link ... actually improved performance.

while we are talking about channel-attach *local* 3274s ... there were
a number of performance issues with the 3274s. One was that they had
very high I/O command processing overhead ... which significantly
increased channel busy time (even when we are talking about channel
data transfer rates of 640kbyte/sec for 3274 controllers).

At the time, the STL machines were configured with 16 channels ...
and had a large number of disks and 3274 controllers intermixed on all
channels.

The HYPERChannel configuration involved moving the *local* channel
attach 3274s to the remote site ... and connecting them to HYPERChannel A510
remote device adapters (A510s emulated ibm mainframe channels). Then
HYPERChannel A220 device adapters were attached directly to the IBM
channel .... and the HYPERChannel transport layer handled the
connection between the local A220 channel attach box and the remote
A510 ibm channel emulation box. It turns out that the A220 was a much
faster box than the 3274 and resulted in significantly lower channel
busy time doing the exact same operations (vis-a-vis the configuration
with 3274s physically attached directly to IBM channels).

The reduced channel busy resulted in better disk i/o thruput and an
overall system thruput improvement of 15-20 percent. The overall
system thruput improvement also contributed to improved trivial
interactive response ... which more than offset any increase in
latency involving HYPERChannel and microwave links.

When my wife had been con'ed into going to POK to be in charge of
loosely-coupled architecture ... misc. references
http://www.garlic.com/~lynn/subtopic.html#shareddata

.... one of the groups she worked with was the POK interconnect group
.... which primarily was doing CTCA and then 3088 ... but had hopes of
some day getting fiber-optic interconnect out the door (which they
eventually were able to do as escon). To some extent, the POK
interconnect group and my wife fought the same battles with the SNA
organization ... over being forced to revert to SNA operation anytime
they crossed the machine room wall boundary.

However, when it came time to try and get my HYPERChannel device
drivers released as IBM products ... not only were there loud howls of
objection from the SNA organization ... but the POK interconnect group
came down on the side of the SNA organization. In this scenario they
were still harboring hopes that they could prevail over the SNA
organization and get out their high-speed fiber optic interconnect
.... and have it available crossing the machine room boundary. The POK
interconnect group felt that the HYPERChannel interconnect was
(potentially) as much a longterm threat to their objectives as it was
a threat to the SNA organization's products (so it was time to circle
the wagons, forget for the moment their differences and oppose the
common enemy).

... a little history ... Cray and Thornton worked together at CDC; Cray
left to found Cray Research to build supercomputers, Thornton left to
do high-speed heterogeneous i/o interconnect and founded Network
Systems ... which produced HYPERChannel. More recently, NSC was
acquired by STK ... which just recently was acquired by SUN.
 
gerard46

| keith wrote:
|> Lynn Wheeler wrote:
|>> Stephen Fuld wrote:
|>> But the TSO editor was there, and clearly intended for program
|>> development

|> there are lots of jokes among customers about TSO being too slow to
|> use for interactive (even some recent threads in ibm-mainframe group).
|> basically TSO was slightly better than using a keypunch.

| Dunno, I used TSO for more than a decade for interactive applications.
| The editor of choice was SPF, then ISPF. We used TILES for logic entry
| and I even built a part number database for the management types using the
| ISPF panel and table calls. TSO/MVS wasn't *that* bad.

After ten years of mediocre response time, I suppose anybody can get
used to anything. But once you get on a system with subsecond response
times (say, 1/3 second), anything over a second seems long, and when it
gets over two seconds, it seems intolerable. It all boils down to what
you get used to. _______________________________________________Gerard S.



|> when we were arguing with product group about being able to have subsecond
|> response with the new 3274 display control units ... effectively TSO
|> came down on the side of the 3274 group ... since they never had
|> subsecond response ... even with the faster 3272 controller. Basically
|> the 3274 group defined their target market (what is cause and what is
|> effect?) as data entry (previously done on keypunches), which doesn't have
|> any issues with system performance and response.

| We had second(ish) *trivial* response when we were co-located with the
| hardware. With a block editor this wasn't too bad (sure beat 2741s on
| acoustic couplers). In Bldg. 701 such performance was rather tough to do
| since the systems were all in SRL (about five miles away) and there were
| three heads per 9600bps modem. :-(
 
Anne & Lynn Wheeler

Anne & Lynn Wheeler said:
reference to old report with numbers comparing 3277 & 3274:
http://www.garlic.com/~lynn/2001m.html#19 3270 protocol

from above:

             hardware    TSO          CMS           CMS
             response    (1 sec sys)  (.25 sec sys) (.11 sec sys)
3272/3277      .086        1.086        .336          .196
3274/3278      .530        1.530        .780          .640

(total response = hardware response + system response)

...

as an aside ... the 3272 hardware response was relatively uniform ...
while the 3274 hardware response was data sensitive ... the .53 value
was somewhat a nominal best case; the TSO 1sec was a hypothetical
avg., and the .25sec was a measured avg. across a large number of actual
operations ... note however, for the .11sec live system, long term
number ... it wasn't avg. ... it was 90th percentile (avg. was
actually less).

one of the (other human factors) problems with the 3274 was that (keys &)
typamatic was implemented in the controller ... with fixed values (that
was one of the things that got moved out of the 3277 and back into the
controller to cut down on terminal manufacturing costs).

one of the turbo things we could do with the 3277 ... was a little
soldering inside the keyboard to choose your own typamatic delay &
rate values. We had a number of keyboards done at .1 & .1. However,
the .1 typamatic rate was faster than the screen refresh rate ...
holding down a cursor movement key would develop a time-delay lag on
the screen ... and then the cursor would appear to coast for some
period of time after you took your finger off the key. It took a little
bit of getting used to ... in order to have the cursor stop at the exact
screen position (since the display feedback was lagging behind how
long you held the key down).
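The coasting cursor is easy to model as a rate mismatch (a toy sketch; the post only says the refresh rate was slower than the .1 typamatic rate, so the 5 updates/sec figure here is an assumption, not from the post):

```python
# Toy model of the "coasting cursor": key repeat at 10 movements/sec
# (the .1-second typamatic rate described), with the screen assumed to
# draw only 5 updates/sec. Movements queue up faster than they can be
# drawn, so the cursor keeps moving after the key is released.
REPEAT_RATE = 10    # cursor movements per second (0.1s typamatic)
REFRESH_RATE = 5    # assumed screen updates per second
hold_time = 2.0     # seconds the key is held down

keystrokes = REPEAT_RATE * hold_time         # movements queued while held
drawn_while_held = REFRESH_RATE * hold_time  # movements actually shown
coast = (keystrokes - drawn_while_held) / REFRESH_RATE

print(f"cursor coasts for {coast:.1f} sec after the key is released")
```

The longer the key is held, the bigger the backlog, which matches the described need to release the key early to land on the intended position.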
 
Del Cecchi

Anne & Lynn Wheeler said:
there are lots of jokes among customers about TSO being too slow to
use for interactive (even some recent threads in ibm-mainframe group).
basically TSO was slightly better than using a keypunch.

when we were arguing with product group about being able to have
subsecond response with the new 3274 display control units ...
effectively TSO came down on the side of the 3274 group ... since they
never had subsecond response ... even with the faster 3272 controller.
Basically the 3274 group defined their target market (what is cause and
what is effect?) as data entry (previously done on keypunches), which
doesn't have any issues with system performance and response.
snip

That is why fortress rochester had "MTMT" on our mvs systems, basically a
long running batch job that talked to the terminals and handled files
etc. Ran the whole lab on a 360/65 or two, later a pair of 360/85s that IBM
got back from a customer who wasn't exactly satisfied.

del
 
Anne & Lynn Wheeler

Anne & Lynn Wheeler said:
However, when it came time to try and get my HYPERChannel device
drivers released as IBM products ... not only were there loud howls of
objection from the SNA organization ... but the POK interconnect group
came down on the side of the SNA organization. In this scenario they
were still harboring hopes that they could prevail over the SNA
organization and get out their high-speed fiber optic interconnect
... and have it available crossing the machine room boundary. The POK
interconnect group felt that the HYPERChannel interconnect was
(potentially) as much a longterm threat to their objectives as it was
a threat to the SNA organization's products (so it was time to circle
the wagons, forget for the moment their differences and oppose the
common enemy).

several years later I was able to ship RFC1044 support in the mainframe
tcp/ip product ... but by then neither the SNA organization nor the
POK interconnect group really thought that TCP/IP support was much
of an issue. misc. collection of 1044 postings
http://www.garlic.com/~lynn/subnetwork.html#1044
 
Anne & Lynn Wheeler

ref:
http://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design

the modified PC 3270 emulator was available with support for TCA cards
(to 3274) and was called MYTE; PCCA was the 16bit ISA mainframe channel
interface card.

a reference from long ago and far away:

The demo they had with PCCA on PCNET with various host connections was
quite impressive, both terminal sessions and file transfer. Terminal
sessions supported going "both ways" ... PVM from PCDOS over PCNET to
an AT with PCCA, into 370 PVM and using the PVM internal net to log on
anywhere. A version of MYTE with NETBIOS support is used on the local
PC machine. They claim an end-to-end data rate of only 70kbytes per
second now ... attributed to bottlenecks associated with NETBIOS
programming. They could significantly improve that by bypassing
NETBIOS and/or going to a faster PC-to-PC interconnect (token ring,
ethernet, etc). However, 70kbytes/sec is still significantly better
than the 15kbytes/sec that Myte gets using TCA support thru a 3274.
 
George Macdonald

While I agree with your sentiment, there wasn't any lack of a graphics
card. The CGA card and monitor were announced and shipped with the 5150.
I had a CGA card (couldn't afford the monitor) on my "first day order"
5150. ...along with the monochrome card and monitor.

Oh come on - CGA? Most folks got a Hercules monochrome card for business
use from what I saw... until EGA came along.
OTOH, contrary to Nick's point, the 5150s did not ship with a 3270
emulator card. Those came later (IIRC IBM wasn't even first), as it
became obvious the PC was a better and cheaper 3270. Saying that the
PC was originally intended to be a 3270 replacement is "new history", at
best.

Did what I said come out sounding like that?... sorry.
It was supposed to be. ;-) Remember, RISC was the savior, CISC was
dead-end. Oops.

There *had* been some horrible CISC machines when that was in fashion.:)
 
George Macdonald

If you had been closely involved, you would know what was being
planned, and would not have jumped to erroneous conclusions.
I was involved in the SAA project, in several ways, and the IBM
PS/2 was indeed intended to replace 3270s. But not in the way
that you think. You are correct that the original IBM PC was not
intended to replace 3270s, but nobody said that it was.

IBM had realised that 3270s were too limiting, and that the future
was GUIs. So IBM set up the CUA project to design a common user
access model for display stations (3270s) and GUI workstations
(PS/2s). The intent was that applications would be designed to
support both, and that the former would gradually be phased out
in favour of the latter.

Do you understand now?

Not sure who you're addressing there but you seem to be bouncing around in
a perpetual time warp.... PC, PS/2... a coupla generations apart.
The above is, of course, twaddle - as almost everyone who used a good
68K based system can witness. The immense design superiority of the
range over the x86 range allowed it to run a real operating system,
with all of the advantages that implied. The first x86 CPU that
could run more than a crippled operating system was the 80386, and
there were some significant "gotchas" with that. That is one of the
reasons that relatively few fully functional operating systems were
shipped for the x86 line until the 486 arrived. The 68K had them
from the 68000 onwards.

I used both, I ran the benchmarks - raw CPU/memory performance of the 68020
vs. 80386 just wasn't there. As for running a real operating system on
either 80386 or 68020, in that era the target was minicomputers and both
failed to measure up. I looked at several 68020 "real" systems, as well as
a NS 32032, and they just weren't good enough.
 
Nick Maclaren

Dunno, I used TSO for more than a decade for interactive applications.
The editor of choice was SPF, then ISPF. We used TILES for logic entry
and I even built a part number database for the management types using the
ISPF panel and table calls. TSO/MVS wasn't *that* bad.

Early TSO was.

Also, 3270s, ISPF etc. were designed and good for things like form
filling, and ghastly for text editing - it was generally felt by
those with experience of better systems that this was one reason
that IBM code (sic) and documentation was so crude.

When I was at Santa Teresa in the 1980s, I was asked for a report
on the facilities, and made two suggestions for improvement: a
central library of IBM manuals and introducing IBM PCs instead of
3270s! Believe it or not :) When I was next there, the latter
had been done, but I can't say what influence (if any) I had had.


Regards,
Nick Maclaren.
 
Anne & Lynn Wheeler

gerard46 said:
After ten years of mediocre response time, I suppose anybody can
get used to anything. But once you get on a system with subsecond
response times (say, 1/3 second), anything over a second seems long,
and when it gets over two seconds, it seems intolerable. It all
boils down to what you get used
to. _______________________________________________Gerard S.

there was actually a study about human factors and response time
.... and unpredictable response time turned out to be a significant
factor ... if people got used to 2 second response time ... they would
change their behavior to accommodate the infrastructure. however, if
there was significant variability between 1 second and 3-4 seconds,
they might do something based on expecting 1 second ... and then have
to wait until it was really ready. this degraded human performance by
a factor of twice the difference between the expected and the actual.
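That rule of thumb is simple enough to write down (a sketch of the heuristic as stated in the post; the function name is mine):

```python
# Heuristic from the post: the human cost of an unpredictable response
# is roughly TWICE the gap between the expected and the actual response
# time, since the user commits to an action timed for the expected
# response and then has to recover when the system isn't ready.
def human_cost(expected, actual):
    """Extra human delay (sec) per interaction, per the post's heuristic."""
    return 2 * max(0.0, actual - expected)

# user plans around 1 second, system occasionally takes 3 seconds:
print(human_cost(1.0, 3.0))   # -> 4.0 seconds of degraded performance
```

So a system that varies between 1 and 3-4 seconds can cost the user more than a system that is consistently 2 seconds, even though the average is similar.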

there was a corporate conference held at the original marriott in
wash dc in the early 70s ... where Mills spoke about super programmer
and some HF group spoke on human performance and system response. they
measured a bunch of people at research ... and found that human
thruput consistently improved with system response down until
about .25seconds. Between .25seconds and .1second it became somewhat
variable ... apparently differences between individuals. Some people
didn't notice the difference between .25 and .1 second response, and
some people would notice all the way down to .1 seconds (which was
about the best they measured in any individual). they claimed to not
find any correlation between the threshold perception of different
individuals and other characteristics. I have a vague recollection of
running across an article in the early 80s on a study of brain synapse
propagation time and finding individual variability ... which may or
may not have any correlation with individual response perception
threshold.

it did give rise to some derogatory jokes about tso users not being
able to perceive difference between greater than second response and
subsecond response.

part of the MVS TSO issue involved MVS system structure ... and wasn't
solely TSO's fault.

sjr/bldg28 for a period had a pair of 370s all sharing the same disk
control units, a 370/168 for MVS and a 370/158 for VM370/CMS. The
operators were instructed to keep disk mounts segregated between disk
controllers identified as MVS and controllers identified as VM. The
issue is that MVS normal disk operation includes multi-track search
operations which could create a solid controller-busy lock-up for as long
as 1/3rd second for a single operation. A controller would typically
have 16 disk drives ... and when a controller was busy in this manner,
the other 15 disk drives were unavailable during the busy period
(i.e. you might expect something like 20-40 accesses per second per disk
.... in this worst case scenario, things might degrade to a total of 3
accesses per second per controller ... total, across all 16 drives).
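The scale of that degradation is worth spelling out (a back-of-envelope model using only the numbers in the post):

```python
# Back-of-envelope model of a controller saturated by multi-track
# searches: each search holds the controller busy for ~1/3 second, so
# the controller completes only ~3 operations/sec -- shared across ALL
# 16 attached drives -- versus 20-40 accesses/sec PER DRIVE normally.
SEARCH_HOLD_TIME = 1.0 / 3.0            # controller busy per search (sec)
DRIVES_PER_CONTROLLER = 16
NORMAL_ACCESSES_PER_DRIVE = (20, 40)    # typical accesses/sec per drive

normal_total = tuple(n * DRIVES_PER_CONTROLLER
                     for n in NORMAL_ACCESSES_PER_DRIVE)
worst_case_total = 1.0 / SEARCH_HOLD_TIME   # whole-controller throughput

print(f"normal controller thruput: {normal_total[0]}-{normal_total[1]} accesses/sec")
print(f"worst case (back-to-back searches): {worst_case_total:.0f} accesses/sec")
```

That is roughly a two-orders-of-magnitude collapse in aggregate throughput for every drive behind the affected controller, which is why a single misplaced MVS pack was immediately visible to CMS users.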

one day, an operator accidentally mounted a "MVS" disk on a "VM"
controller/drive. within five minutes the datacenter operations were
getting phone calls from cms users howling about system response
having just gone down the drain.

besides the normal TSO characteristics not being sensitive to human
factors and system response ... the underlying MVS platform wasn't
conducive to fine-grain response. the incident was a great example
that not only didn't TSO users realize how bad it really was ... but
how the underlying MVS platform contributed to TSO not being able to
provide response (i.e. TSO users ran day in & day out ... with all MVS
disks operating with the characteristics that CMS users found totally
intolerable when subjected to just a single disk operating in that
manner).

lots of past posts about the effect of multi-track search on
response and thruput
http://www.garlic.com/~lynn/subtopic.html#dasd

note that the extensive use of multi-track search could become so bad
that it would even affect MVS shops. I got brought into a large
customer shop that didn't have any vm370 at all. It was a datacenter
for a large national retailer ... that had basically a processor per
region ... but shared a common disk infrastructure. they were
experiencing random slow-downs that appeared to bring the whole
complex nearly to its knees. It turned out to be an issue with a large
application library partitioned data set ... that when things really
got busy during the day ... all the systems were constantly loading
members from the same library. The PDS had a 3-cylinder directory and
each member load required, on the avg., a 1.5-cylinder multi-track
search of the directory (taking approx. .5 seconds elapsed time).
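The .5 second figure is consistent with period disk geometry (a rough check; the post doesn't name the drive model, so the 19 tracks/cylinder and 3600 rpm figures below are assumptions based on the 3330/3350 family):

```python
# Rough sanity check of the ~.5 sec multi-track search figure,
# assuming 3330/3350-class geometry: 19 tracks per cylinder, 3600 rpm
# (16.7 ms per revolution), and roughly one revolution spent searching
# each track of the directory.
TRACKS_PER_CYL = 19           # assumed: 3330/3350 family geometry
REV_TIME = 60.0 / 3600        # seconds per revolution at 3600 rpm
avg_search_cyls = 1.5         # avg search over half the 3-cylinder directory

tracks_searched = avg_search_cyls * TRACKS_PER_CYL
print(f"approx search time: {tracks_searched * REV_TIME:.2f} sec")
```

1.5 cylinders × 19 tracks × 16.7 ms comes out just under half a second per member load, during which the controller (and every drive behind it) is locked out.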
 
David Hopwood

Anne said:
sjr/bldg28 for a period had a pair of 370s all sharing the same disk
control units, a 370/168 for MVS and a 370/158 for VM370/CMS. The
operators were instructed to keep disk mounts segregated between disk
controllers identified as MVS and controllers identified as VM. The
issue is that MVS normal disk operation includes multi-track search
operations which could create a solid controller-busy lock-up for as long
as 1/3rd second for a single operation. A controller would typically
have 16 disk drives ... and when a controller was busy in this manner,
the other 15 disk drives were unavailable during the busy period
(i.e. you might expect something like 20-40 accesses per second per disk
... in this worst case scenario, things might degrade to a total of 3
accesses per second per controller ... total, across all 16 drives).

one day, an operator accidentally mounted a "MVS" disk on a "VM"
controller/drive. within five minutes the datacenter operations were
getting phone calls from cms users howling about system response
having just gone down the drain.

besides the normal TSO characteristics not being sensitive to human
factors and system response ... the underlying MVS platform wasn't
conducive to fine-grain response. the incident was a great example
that not only didn't TSO users realize how bad it really was ... but
how the underlying MVS platform contributed to TSO not being able to
provide response (i.e. TSO users ran day in & day out ... with all MVS
disks operating with the characteristics that CMS users found totally
intolerable when subjected to just a single disk operating in that
manner).

This is a good example of the phenomenon described in section 1.2 of
<http://citeseer.ist.psu.edu/cooper93argument.html>.
 
Anne & Lynn Wheeler

gerard46 said:
After ten years of mediocre response time, I suppose anybody can
get used to anything. But once you get on a system with subsecond
response times (say, 1/3 second), anything over a second seems long,
and when it gets over two seconds, it seems intolerable. It all
boils down to what you get used
to. _______________________________________________Gerard S.

another indication of how bad TSO response was (especially
vis-a-vis CMS potentially running on an identical hardware configuration
and effectively similar load) ... was that in the late 70s and early
80s ... some number of startups actually got VC money for doing 3274
clone controllers that featured TSO "offload".

typical CMS trivial interactive response would include some disk
operations ... even when TSO trivial interactive response was defined
to exclude operations involving disk ... there was still a significant
response issue. 3274 controller clones ... would provide front-end
processing for some number of frequent TSO interactive operations (for
the connected 327x terminals) ... attempting to mask the horrible
system backend operation (and the remarkable thing was this was so
well recognized that you could even get VC money for startups doing
it). With the eventual proliferation of PCs with terminal emulation,
the front-end TSO offload (backend response masking) could be moved to
the PC ... obsoleting the need for the 3274 controller clones.

there used to be a joke in the valley that there were actually only
200 people ... they just milled around in different disguises.

One of the 3274 controller clone startups was done by somebody who had
been in the vlsi tools group in the Los Gatos lab (bldg. 28) and was also one
of the two people responsible for the 370 pascal compiler (originally
developed for writing internal vlsi tools). He then went on to be
VP of software development at MIPS and at the first JAVA conference
.... showed up as general manager of the SUN group that included JAVA.
 
Nick Maclaren

|>
|> > After ten years of mediocre response time, I suppose anybody can
|> > get used to anything. But once you get on a system with subsecond
|> > response times (say, 1/3 second), anything over a second seems long,
|> > and when it gets over two seconds, it seems intolerable. It all
|> > boils down to what you get used
|> > to.
|>
|> there was actually a study about human factors and response time
|> ... and unpredictable response time turned out to be a significant
|> factor ...

Yes. It is very obvious, if you study yourself :)

|> ... and found that human
|> throughtput consistently improved with system response down until
|> about .25seconds. Between .25seconds and .1second it became somewhat
|> variable ... apparently differences between individuals. Some people
|> didn't notice the difference between .25 and .1 second response, and
|> some people would notice all the way down to .1 seconds (which was
|> about the best they measured in any individual). ...

Yes, but that is in response to relatively coarse interactions,
such as individual commands. A 100 millisecond delay on character
deletion is a real pain, and it makes many GUI operations (such
as drag to position) extremely stressful and slow. With such things,
the maximum delay you can tolerate without irritation is down in
the 10-20 millisecond range.

This is one reason that I stick with the Bourne shell in Unix; it
is the only one that uses cooked mode, and therefore line building
is done in the kernel. From choice, I use an environment where it
is done locally (i.e. on my desktop or equivalent) when executing
commands obeyed on a remote system.


Regards,
Nick Maclaren.
 
Colonel Forbin

there was actually a study about human factors and response time
... and unpredictable response time turned out to be a significant
factor ... if people got used to 2 second response time ... they would
change their behavior to accommodate the infrastructure. however, if
there was significant variability between 1 second and 3-4 seconds,
they might do something based on expecting 1 second ... and then have
to wait until it was really ready. this degraded human performance by
a factor of twice the difference between the expected and the actual.

This is something I went to great lengths to engineer around at a former
place of employment. These folk had been victims of what seems to have
been a kickback scheme between high-level executives and a sleazy VAR
which left them having paid list price for a bunch of already
obsolete HP PA-RISC gear which had every expansion slot stuffed the day
it rolled in the door. This was to support SAP R/3 SD and similar work.
In the mean time, the company went through a series of mergers and
acquisitions which escalated the user base by a factor of four above
the design target for the hardware.

Naturally, replacing "new" hardware was not a popular topic, yet the
company was bleeding profit both from inability to process the workload
as well as losing top sales personnel who were frustrated by the inability
to invoice sales in a timely and efficient fashion.

In addition to optimizing the load balance of a RAID storage solution,
I dissected the kernels of Oracle and SAP R/3 and used the HP-UX realtime
scheduler class to force preemption of user processes by important parts
of the applications kernels.

This changed the interactive behavior of the system from a "bursty"
unpredictable response time for transactions caused by deadlocks to a
smooth decay of response time which still delivered slightly subsecond
response with four times the design workload, at a Unix load average of
around 27 on a 4-way database server. The end result was predictable
latency barely within SAP best practices guidelines.
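For readers curious about the mechanism: the post's actual HP-UX rtsched manipulation isn't shown, but the analogous POSIX realtime-class interface looks roughly like this (an illustrative sketch only, not the original code; the function name is mine, and on HP-UX the equivalent was the rtsched facility):

```python
import os

# Illustrative sketch: moving a critical server process into the
# realtime FIFO scheduling class ensures it preempts ordinary
# timeshare processes, smoothing response under heavy overload
# (the effect the post describes achieving with HP-UX rtsched).
def promote_to_realtime(pid=0, priority=None):
    """Try to move `pid` (0 = this process) into SCHED_FIFO.

    Returns True on success, False if we lack privilege
    (needs root / CAP_SYS_NICE on Linux).
    """
    lo = os.sched_get_priority_min(os.SCHED_FIFO)
    prio = priority if priority is not None else lo  # modest RT priority
    try:
        os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(prio))
        return True
    except PermissionError:
        return False

if __name__ == "__main__":
    print("FIFO priority range:",
          os.sched_get_priority_min(os.SCHED_FIFO), "-",
          os.sched_get_priority_max(os.SCHED_FIFO))
    print("promoted:", promote_to_realtime())
```

The design point is the same as in the post: rather than making anything faster, realtime priority makes the latency of the critical path predictable, trading "bursty" stalls for a smooth decay under load.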

This permitted the company to continue operations while contemplating
what course of action to take. I proposed two options: either a high-RAS
hardware upgrade from a different vendor, or simply outsourcing
the whole mess as a service. Unfortunately, I ended up leaving shortly
thereafter of my own volition after top management refused to end their
love affair with the VAR who sold them this junk at full list (remember
how inflated HP's list prices used to be).
 
Bill Davidsen

Nick said:
I didn't mention it, because it wasn't relevant. Back in the days
of the 80386, no serious company used an IBM PC for that! Intel's
second success was breaking into that market, but that came after
the PowerPC had failed.

Clearly the market was small, but actually the lowly XT, running UNIX
(PC/IX), was able to support a small office very well, far better than
the Z80 MP/M systems I was selling in the 70's ;-)
What those people do can't really be called conventional programming,
and quite a lot of the languages they use aren't even Turing complete
(ignoring finiteness restrictions). The conventional programming for
the "commercial" systems is done by a fairly small number of people
(e.g. the people who develop Oracle), and the vast number use those
higher-level programs.

I can witness that IBM used to regard the actual programming of even
some of the most "commercial" codes as a "scientific/technical"
activity :)

There were some pretty modern languages around, even "back when." I used
PL/1 (subset G) in the late 70's to do data analysis, and other than
compiles taking hours the resulting code was fast enough to be useful.

I do miss subset-G, one of the languages which never seemed to make it
to open source.
 
Nick Maclaren

Clearly the market was small, but actually the lowly XT, running UNIX
(PC/IX), was able to support a small office very well, far better than
the Z80 MP/M systems I was selling in the 70's ;-)

Well, the one I used lost characters because it couldn't keep up
with a two-fingered typist - but you may reasonably blame Willy G.
for that :)

My main point about serious companies not touching those things for
the sales, inventory, payroll etc. wasn't the performance but the
reliability. The early IBM PCs had virtually no error checking,
and were very prone to giving wrong answers with no diagnostics.


Regards,
Nick Maclaren.
 
Del Cecchi

snip
My main point about serious companies not touching those things for
the sales, inventory, payroll etc. wasn't the performance but the
reliability. The early IBM PCs had virtually no error checking,
and were very prone to giving wrong answers with no diagnostics.


Regards,
Nick Maclaren.

In contrast to the current PC desktop and laptop machines which have
.........virtually no error checking or diagnostics. But people love them
anyway. RAS is apparently over rated :-(

del
 
Nick Maclaren

In contrast to the current PC desktop and laptop machines which have
........virtually no error checking or diagnostics. But people love them
anyway. RAS is apparently over rated :-(

Very true. But even my employer doesn't run its payroll software on
such systems, and I rather suspect yours doesn't, either :)

I fully agree that few companies analyse the potential for data
corruption and loss as the information passes through the 'desktop'
systems. But the context was in that of the business/commercial
market, which is what senior managers perceive it to be.


Regards,
Nick Maclaren.
 
Robert Redelmeier

In comp.sys.ibm.pc.hardware.chips Nick Maclaren said:
Yes, but that is in response to relatively coarse interactions,
such as individual commands. A 100 millisecond delay on
character deletion is a real pain, and it makes many GUI
operations (such as drag to position) extremely stressful
and slow. With such things, the maximum delay you can tolerate
without irritation is down in the 10-20 millisecond range.

I've found long feedback delays (multi-second) are perfectly
acceptable so long as they occur when the human can confidently
type-ahead or is satisfied to wait (complex command executing).

Delays become irritating when the visual feedback is required
(another cursor keypress?) especially when they are inexplicable.
This is one reason that I stick with the Bourne shell in
Unix; it is the only one that uses cooked mode, and therefore
line building is done in the kernel. From choice, I use an
environment where it is done locally (i.e. on my desktop or
equivalent) when executing commands obeyed on a remote system.

I do not believe SSH has any such line-by-line protocol
as TELNET does.

-- Robert
 
