Will this computer run Vista adequately?


Stephan Rose

If those were created in a "linear" way, then the process should be
fairly "smooth" from one HD to another, tho back-and-forth thrashing
would be inevitable if the source and destination were different
volumes on the same physical HD.

Yea it was from one HD to another. Though even if I move such data on the
same HD, I have by far less head thrashing than I do under windows.
Virtually none actually...
I asked, because I was wondering about possible interface intelligence
that SCSI and perhaps S-ATA/NCQ are reputed to have. That may clump
any OS burps that may pop up during the copy ;-)

Is this a system with enough RAM to prevent paging?

I could probably turn off my swap partition and wouldn't even notice.
So yes =)
I'm pretty sure linear addressing (LBA) will fill cylinders before
clicking heads forward. A "cylinder" being the logical construct of
all tracks on all platter surfaces that can be accessed without moving
the heads - that could span physical HDs in RAID 0, for example.

Exceptions might arise when HD firmware (or OS "fixing") relocates
failing sectors (or clusters) to OK spares. Shouldn't really be an
active "feature" in a healthy setup, tho.

Very good point, I didn't even think about that.
NTFS duplicates the first few records of the MFT, and that's it. Not
sure if that includes the run start/length entries for all files,
which would be NTFS's equivalent to FAT1+FAT2.

It sounds as if Ext2/3 does something similar, depending on just how
far "defines the file system itself" takes you.

The superblock basically, first 1kb of the partition that defines how the
file system is structured.
Depends on how the NTFS was created, perhaps - i.e. formatted as NTFS,
or converted from FATxx, etc. I know there's an OEM setup parameter
to address this, which sets aside space for an eventual NTFS MFT after
the initial FATxx file system is converted later. I think this is to
cater for a "FAT32, install/image OS, convert NTFS" build method that
old build tools may require some OEMs to use.

I don't think I have converted a file system from FAT32 to NTFS since 1995 =)
Also, not everything that is "can't move" is MFT; some may be
pagefile, etc. Not sure how Diskeeper shows this info...

It separates them into different categories. MFT gets its own color.
Directory files get their own color. Pagefile gets its own color. Other
non-movable stuff gets its own color.

It does a pretty nice job, I was happy with it under windows. I am still
glad I don't really need it anymore though. =)
Yup. Also, does defragging purge old entries?

No it doesn't. A defragger has no business modifying the file
system contents itself, such as shrinking the MFT. It should
defrag and that is it. Shrinking the MFT should be done via a separate
tool, in my opinion.

Not even sure if there is such a tool for NTFS.
I kinda like the old FATxx design, for one particular reason: it's
easy to preserve and re-assert the file system structure in data
recovery scenarios, if you know exactly where it will be.

I'm not sure how well Ext2/3 does if the file system is trashed and needs
to be repaired. I do know this though, in Ext3 file undeletion is
impossible as it zeroes out the block pointers in the inode for
reliability reasons in the event of a crash.
Hmm... that sounds like a missed "scalability" opportunity to me.

It has major performance advantages though. Something that doesn't need to
be created doesn't take time to create it. I think in the days of hard
disks with less than 1 gigabyte there were very rare occurrences of the
Ext2 limits being reached. But today? Ext3 can handle volumes up to 32
Terabytes and the associated file volume one could expect from that much
data storage.

So why spend time constantly resizing the MFT, and subsequently introduce
a potential failure point in the file system, when there is no need to?
Defrag = "duty now for the future", where the intention is to waste
time defragging at times the user isn't trying to work. Underfootware
defragging is a Vista idea that I have yet to warm to.

Agreed, it's so nice not to need that. ;)
What's more of a problem is the thrashing that can happen when the
status of files is perceived to change. An overly-sensitive
"frequently-used" logic could cause that, so you need a bit of
hysteriesis (spelling?) to lag that from bouncing around.

Heuristics ;)
Logically, all surfaces, heads and tracks of "the disk" are considered
as one, and (hopefully) addressed in a sequence that minimizes head
travel. This logic has been present since DOS first dealt with
double-sided diskettes, where they are filled in sector, head, track
order (rather than the sector, track, head order you intuit).

My first disk system didn't have that logic; it treated each side of a
diskette as a separate disk drive! That was a home-cooked add-on for
the ZX Spectrum, which I worked with quite a bit.

Hey, get two disks for the price of one ;)

Which means if you want to concentrate travel within a narrow band,
then leaving it to defrag isn't the hot setup - better to size
partitions and volumes and apply your own control to what goes where.

Which is why I absolutely love not having drive letters under linux. =)

I could go as far as creating a set of directories, each mounted to a
different partition, to categorize my data and impose limits on how much
of said data I want to be able to store. That way I can always guarantee
there will be X amount of space in a certain directory if I have a need
for that.

Try that under windows...you'd end up with a sea of meaningless
drive letters.
Else, as far as I can see, you will always have a shlepp from the
old/fast code at the "front" to the new stuff at the "end". The fix
for that is to suck out dead stuff from the "middle" and stash it on
another volume, then kick the OS's ass until it stops fiddling with
this inactive stuff via underfootware processes.

That includes the dead weight of installation archives, backed-up old
code replaced by patches, original pre-install wads of these patches,
etc. which MS OSs are too dumb to do. A "mature" XP installation can
have 3 dead files for every 1 live one... nowhere else would that sort
of inefficiency be tolerated.

I think mine in the office is up to a ratio of 5:1 =)
Sure, fair enough. By now, one might expect apps doing "heavy things"
to spawn multiple threads, and Vista's limits on what apps can do
encourages this, i.e. splitting underfootware into parts that run as
service and as "normal app" respectively.

These days, you need a spare core just to run all the malware ;-)

Hahahaha! I like it =)
Yup, no lie there. I think the compiler will take care of some
details, as long as you separate threads in the first place.

Actually no, not really, it doesn't. The responsibility that my multi
threaded code works rests all on me. The compiler gives me absolutely no
aid in that regard. All existing programming languages essentially lack
the ability to properly define a multi-threaded application. The compiler
isn't even aware that my app is multi threaded. To it, each thread is just
another function. The only thing that makes it multi threaded are the
calls I make to the OS to tell it which functions of my code I'd like
spawned on their own threads.
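
For illustration, a minimal sketch of what that looks like in C++
(std::thread here just wraps the underlying OS call, e.g. CreateThread or
pthread_create; the function name is made up):

#include <iostream>
#include <thread>

// To the compiler this is just another function; nothing marks it as "threaded".
void crunchChunk(int chunkId) {
    std::cout << "processing chunk " << chunkId << "\n";
}

int main() {
    // The only thing that makes this multi-threaded is the explicit spawn call,
    // which asks the OS to run the function on its own thread.
    std::thread worker(crunchChunk, 42);
    worker.join();   // wait for the worker before exiting
}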

Microsoft's .Net Framework does give *some* aid in multi-threading during
runtime, such as modifying properties of some windows controls will throw
an exception if it's not thread-safe. But these are all runtime checks.
The compiler still merrily compiles it all without giving an error.
AFAIK NT's been multi-core-aware for a while, at the level of separate
CPUs at least. Hence the alternate 1CPU / multi-CPU HALs, etc.

Not sure how multiple cores within the same CPU are used, though - it
may be that the CPU's internal logic can load-balance across them, and
that this can evolve as the CPUs do. I do know that multi-core CPUs
are expected to present themselves as a single CPU to processes that
count CPUs for software licensing purposes.

It's up to the operating system to load balance the CPU. The CPU itself
can do very little in that regard. If it gets a piece of code to execute,
it can't say "I am going to execute half on one core, half on the other",
as the second half may depend on the results of the first half and so
cannot be executed in parallel.

There are things each core does on its own to attempt to improve
performance, but load-balancing across the cores is an operating system
task. And the best the OS can do is spread active threads across the cores.
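
In other words, the application's whole contribution is to provide enough
runnable threads; a sketch of sizing a worker pool to the core count
(std::thread::hardware_concurrency is only a hint, and the workload is made up):

#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical per-thread workload.
void doWork(unsigned workerId) { std::printf("worker %u running\n", workerId); }

int main() {
    // Ask how many hardware threads (cores) are available and spawn that many
    // workers; spreading them across the cores is then the OS scheduler's job.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;                       // the call may return 0 if unknown
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(doWork, i);
    for (auto& t : pool) t.join();
}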
It's been said that Vista makes better use of multiple cores than XP,
but often in posts that compare single-core "P4" processors with
dual-core "Core 2 Duo" processors at similar GHz, without factoring in
the latter's increased efficiency per clock.

So they may in fact only be telling us what we already know, i.e. that
the new cores deliver more computational power per GHz.


No, I can see that being an example where multiple cores would help;
background pagination, spell-checking, and tracking line and page
breaks, for example - the stuff that makes typing feel "sticky". Not
to mention the creation of output to be sent to the printer driver,
background saves, etc. Non-destructive in-place editing can be a
challenge, and solutions often involve a lot of background logic.

Is any of that seriously an issue still though with today's processing
power on just a single core? I don't think a single letter felt "sticky"
writing this post. =P

Though I can see running things like spellcheck, and they probably already
are, on a separate thread. Not as much for performance reasons, but simply
because that is one of those programming problems where multi-threading
makes things sooooooo much easier for a change! Implementing a background
spellcheck without multi-threading would be a nightmare.

Still doesn't really need multiple cores per se though to do it unless
maybe it has a fresh 1,000 page book to chew through I suppose.
Some of that challenge applies to any event-driven UI, as opposed to
procedural or "wizard" UIs. IOW, solutions to that (which unlink each
action from the base UI module, and sanity-check states between the
various methods spawned) may also pre-solve for multi-core.


Sure. I used to "hide" some overhead by doing some processing
straight after displaying a UI, on the basis that the user will look
at it for a while before demanding attention (this is back in the PICK
days) but folks familiar with their apps will type-ahead and catch you
out. There are all sorts of ways to go wrong here, e.g...
- spawn a dialog with a single button on it called OK
- when the user presses OK, start a process
- replace that button with one to Cancel
- when the process is done, report results
- replace that button with one for OK to close the dialog

Yup, I have done stuff like that before quite frequently. =)
I think true multicores may be more "transparent" than HyperThreading,
in that HT prolly can't do everything a "real" core can. So with HT,
only certain things can be shunted off to HT efficiently, whereas a
true multicore processor will have, er, multiple interchangeable cores.

Basically that's correct. =)
There's no doubt that "computing about computing" can pay off, even in
this age of already-complex CPUs. The Core 2 Duo's boost over
NetBurst at lower clock speeds is living proof of that, and frankly, I
was quite surprised by this. I thought that gum had had all its
flavor chewed out of it; witness the long-previous "slower but faster"
processors from Cyrix, and AMD's ghastly K5.

I used to have a Cyrix once!!
Isn't this pretty much what Windows' internal messaging stuff does? I
know this applies to inter-process comms, I just don't know whether
the various bits of an app are different processes at this level.
Perhaps some of this logic can be built into the standard code and UI
libraries that the app's code is built with?

Windows' internal messaging stuff is a single-threaded message
pump. Nothing more than a simple FIFO. Even in a multi threaded
application, the message pump is still a single threaded FIFO.
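
For reference, that pump really is just a serial loop on one thread; a
bare-bones Win32 sketch (window creation omitted, and assuming a window
already exists on this thread):

#include <windows.h>

// The classic message pump: one thread draining one FIFO queue.
// Every message, whichever window or control it targets, is
// dispatched serially from this loop.
int RunMessagePump() {
    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0) > 0) {
        TranslateMessage(&msg);   // e.g. turn key presses into WM_CHAR
        DispatchMessage(&msg);    // call the window procedure, still on this thread
    }
    return static_cast<int>(msg.wParam);
}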

This becomes very evident when writing .Net based applications as many
controls will whine and moan if they are modified from a thread other than
the thread they were created on. If windows would distribute events across
multiple threads those controls would never shut up! =)
The problem is race conditions between the two processes. If you did
spawn one process as a "loose torpedo", the OS could pre-emptively
time-slice it, and it could come to pass that the two halves
wind up on different cores.

My multi threading problem actually is not really a race condition.

Currently my engine is setup as follows:

- One buffer for vertex data
- One buffer for color data

These two buffers are passed to OpenGL as vertex and color arrays
respectively as what I primarily display are just colored polygons. No
textures needed. The buffers each have a fixed size.

The rendering loop then goes to each object and queries it for its vertex
and color data, and this data is then added to the respective buffers.
When the buffers are full, they are submitted to the video card for
rendering and the engine goes to chew on the next set of objects while the
video card is now busy processing the data I sent to it.
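
Roughly, that batching pattern with legacy OpenGL vertex/color arrays might
look like the sketch below (not the actual engine code; the Object type,
buffer size and 2D/RGB layout are assumptions):

#include <GL/gl.h>      // include order and platform headers vary
#include <cstddef>
#include <vector>

struct Object {                       // hypothetical scene object
    std::vector<float> verts;         // 2 floats per vertex (assumed 2D)
    std::vector<float> colors;        // 3 floats per vertex (RGB)
};

static const std::size_t kMaxFloats = 65536;   // fixed-size buffers (assumed)
static std::vector<float> vertexBuf, colorBuf;

static void flushBuffers() {
    // Hand the filled buffers to the GPU and start over.
    glVertexPointer(2, GL_FLOAT, 0, vertexBuf.data());
    glColorPointer(3, GL_FLOAT, 0, colorBuf.data());
    glDrawArrays(GL_TRIANGLES, 0, static_cast<GLsizei>(vertexBuf.size() / 2));
    vertexBuf.clear();
    colorBuf.clear();
}

void renderScene(const std::vector<Object>& scene) {
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    for (const Object& o : scene) {
        // Query each object for its geometry and append it to the buffers...
        vertexBuf.insert(vertexBuf.end(), o.verts.begin(), o.verts.end());
        colorBuf.insert(colorBuf.end(), o.colors.begin(), o.colors.end());
        // ...and submit whenever the buffers fill up, so the GPU chews on
        // the batch while the CPU moves on to the next objects.
        if (vertexBuf.size() >= kMaxFloats) flushBuffers();
    }
    if (!vertexBuf.empty()) flushBuffers();
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
}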

So I do have some level of multi threading here in terms of offloading
processing to the GPU.

That setup works really great for a single threaded rendering.

Multi threaded rendering though would require a much different approach.
Especially since OpenGL does not like multi threading much. All OpenGL
calls have to be made from the same thread the OpenGL context was created
on.

That means I'd need multiple geometry data buffers (one set of buffers per
thread) and each thread would need the ability to submit its geometry data
to the main application thread for submission to the GPU. At the same time
though, the loop in the main application thread has to be in such a way
that it doesn't consume CPU resources. It has to be able to go to sleep
while it waits on the other threads and it also has to instantly wake up
when another thread needs it to do something.

It's all doable and it's all a big pain in the butt to implement!!
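
A minimal sketch of that sleep-until-woken handoff, assuming std::mutex /
std::condition_variable (the GeometryBatch type and queue are made up):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <vector>

struct GeometryBatch { std::vector<float> verts, colors; };   // hypothetical

std::mutex m;
std::condition_variable cv;
std::queue<GeometryBatch> ready;    // batches waiting to be handed to the GL thread

// Worker threads: build geometry, then wake the main thread.
void submitFromWorker(GeometryBatch batch) {
    {
        std::lock_guard<std::mutex> lock(m);
        ready.push(std::move(batch));
    }
    cv.notify_one();                // instantly wakes the main thread if it's asleep
}

// Main (OpenGL) thread: sleeps while idle, wakes when a batch arrives.
void mainLoopIteration() {
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return !ready.empty(); });   // no CPU burned while waiting
    GeometryBatch batch = std::move(ready.front());
    ready.pop();
    lock.unlock();
    // ...all OpenGL calls happen here, on the thread that owns the GL context.
}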

I'd much rather try to offload more processing to the already yawning and
very bored GPU instead. But it's hard to offload memcpy, where over 90% of
my load is, to the GPU! :(
Heh heh ;-)

I think modern games are often built on engines; specifically, an
engine to do graphics and another for sound. The AI behind the
character's activities can be split off as an asynchronous thread,
whereas sound and graphics have to catch every beat.

Actually it is the exact opposite way around!

Physics and AI are on such a tight leash it isn't even funny. Especially
true for multi-player games. But even in single player games, they
generally run on a very exact and tight schedule. 30 Frames per second is
a popular number I think for AI and Physics.

Graphics and sound on the other hand can be, and usually are,
asynchronous. Rendering and sound decoding / output is all handled by
hardware anyway. All the software has to do is keep feeding the
hardware with data before it runs out of stuff to do, especially in regard
to sound. That doesn't require any overly precise timing.

AI and Physics on the other hand, if they don't run at a fixed framerate
then all sorts of weird and bad things happen. Especially in multiplayer
games, they absolutely have to, under all conditions, run at a fixed rate
so that all results are identical on all players' systems.
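
The usual way that fixed rate gets enforced is an accumulator-style loop; a
generic sketch (not lifted from any particular game), with the simulation
pinned to 30 Hz while rendering runs whenever it can:

#include <chrono>

void runGameLoop() {
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> step(1.0 / 30.0);   // fixed 30 Hz for AI/physics
    std::chrono::duration<double> accumulator(0.0);
    auto previous = clock::now();

    for (;;) {                       // exit condition omitted for brevity
        auto now = clock::now();
        accumulator += now - previous;
        previous = now;

        // Step AI/physics in fixed-size increments so every machine computes
        // exactly the same sequence of simulation states.
        while (accumulator >= step) {
            // updateSimulation(step.count());    // hypothetical
            accumulator -= step;
        }

        // Rendering is decoupled: just draw whatever the latest state is.
        // renderFrame();                          // hypothetical
    }
}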

Supreme Commander, just as one example, runs a complete simulation of the
entire battlefield on *every* player's computer and compares them to one
another. If any one player's data does not match all the other players' data
in any way, it's considered a desync and the game is aborted.

Graphics are purely a secondary consideration. An annoying side task that
oddly enough has to be done to keep the player happy ;)
It may help at the OS level, e.g. when copying files, the av scanner
could be running in the "spare" core. I think this was a large part
of the thinking behind HT... also, there will be network packets to be
batted off by the firewall, etc. and that's a coreful of stuff, too.
Well, perhaps not a complete coreful, but YKWIM.

Real-time stuff like MP3 playback (something else that may be built
into games, with the compression reducing head travel) can also
benefit from having a spare core to process them.

I don't think there is a single soundcard these days that can't decode MP3
in hardware. =) CPU has absolutely nothing to do there hehe.
And then there's all the DRM trash to compute, too...

Only if you're running Vista!!


--
Stephan
2003 Yamaha R6

The reason there's never a day I remember you
is that there's never a time I forget you
 

cquirke (MVP Windows shell/user)

I like these trans-OS discussions because they let one explore
possible ways of doing things without being chained down by the way
particular OSs actually do things :)
Yea it was from one HD to another. Though even if I move such data on the
same HD, I have by far less head thrashing than I do under windows.
Virtually none actually...

There'd have to be head clicks after every buffer full of source
material has filled up, so this implies Linux is setting aside a
larger buffer (limited by a bit less than physical RAM) than Windows
does. The downside is less RAM left for anything running at the same
time, and maybe there's more underfootware in Windows.

Also, there will be a minimum unpageable footprint for the OS.

The only way to improve that best-case scenario would be to trade off
processor usage by compressing source material and expanding it when
writing to destination. That's a bad idea where the source material
is already lossily compressed, as it will be with most things large
enough to matter (video, picture collections, archives, MP3s).
The superblock basically, first 1kb of the partition that defines how the
file system is structured.

Sounds as if it may be as shallow as NTFS, then. "Hey, our file system
survived; your data files are your problem"; that mindset increasingly
pervades MS's approach to file system maintenance.
I don't think I have converted a file system from FAT32 to NTFS since 1995 =)

Your OEM may have, if you're still using their setup :)

The usual conversion killers are:
- no way to convert back from NTFS to FATxx
- if process is interrupted or fails, expect data loss
- appropriate permissions etc. are not assigned to files etc.
- 512-byte clusters because volume wasn't boundary-aligned
- loss of product activation "life" as volume serial number changed
It separates them into different categories. MFT gets its own color.

OK. That's better than XP's defragger, which uses the same green for
all material that cannot be moved, forcing you to guess.
Directory files get their own color. Pagefile gets its own color. Other
non-movable stuff gets its own color.

Cool! Hey, MS... did you know SVGA now supports > 16 colors?
It does a pretty nice job, I was happy with it under windows. I am still
glad I don't really need it anymore though. =)

I'd like it as a non-integrated tool, and without the underfootware
component (e.g. it could use XP's native use-tracking)
No it doesn't. A defragger has no business modifying the file
system contents itself, such as shrinking the MFT. It should
defrag and that is it. Shrinking the MFT should be done via a separate
tool, in my opinion.

OK, or perhaps a separate "Tools" thing from the defragger's UI, much
as HiJackThis hangs ADSSpy as an "extra tool".

I do, for example, expect a defragger to purge erased directory
entries, and where directories are indexed as they are in NTFS, that
would involve rebuilding those structures (at least, as I understand
it). Actually, I'd expect MFT to be cleaned up as well, especially if
the defragger boasts that it "defrags the MFT".

Defrag IS a high-risk operation that exists to make a healthy file
system more fit, and that point is not emphasised enough - i.e. it is
NOT a troubleshooting tool, any more than marathon gym training is a
diagnostic test for heart disease.

Even if a defragger "only" defrags files, the risk of disaster (crash,
corruption of what is read from disk and written back due to bad RAM,
etc.) is very much present. Doing that while avoiding the purging of
redundant structure is like jumping out of a plane, but wearing safety
glasses to protect your eyes on landing.

Then again, this fits the "file system's OK, who cares about your
data" vendor-vision thing I mentioned earlier.
Not even sure if there is such a tool for NTFS.

Yup. MS haven't bothered to do much to maintain or recover NTFS, and
because they keep it a proprietary, moving target, no-one else can do
it either. Doing so is high-risk; the file system could change in the
blink of an SP or patch, before you've recouped your investment.
I'm not sure how well Ext2/3 does if the file system is trashed and needs
to be repaired. I do know this though, in Ext3 file undeletion is
impossible as it zeroes out the block pointers in the inode for
reliability reasons in the event of a crash.

Hm. It's one thing to consider (as MS seems to do) the user as alone,
another to consider the user's system as including techs who can fix
things. For example, you might consider disk compression to be
low-risk (as MS did) according to the first model (where if it blinks,
the user's dead anyway) but not the second.

MS talks to us techs about "partnership", but it's largely hot air
when it comes to delivering better value to end users. It only goes
as far as "partnering" to sell more stuff.

This changes in developer and professional network administration,
where genuine technical partnership is common and effective. It's
just the consumers who get left out in the cold, because the nadir of
large "royalty OEMs" is taken as the acceptable baseline.

It's pathetic.
It has major performance advantages though. Something that doesn't need to
be created doesn't take time to create it.

This is true, I guess... same thinking as defragging, really; "duty
now for the future". It's just that once you create all those empty
file structure items, you're obliged to maintain them, if you intend
to use them without sanity-checking them first.

If you do sanity-check them first, then that's prolly the same sort of
overhead as creating them (tho noting reads are "cheaper" than writes,
which applies to HDs and flash for different reasons)
So why spend time constantly resizing the MFT, and subsequently introduce
a potential failure point in the file system, when there is no need to?

I wouldn't resize MFT on the fly, as Vista tries to do with defrag;
that jars with the concept of "the OS should initiate no risky activities
unless the user asked for them". In that sense, I'd defrag
and purge-out MFT when defragging, and I'd defrag only when this is
explicitly initiated by the user.

Being old-school, I'd prolly do it outside of multitasking "business
as usual", either on a lock-down, defrag, shutdown basis or on a
restart, defrag, initiate multitasking basis. The latter's more
common (e.g. the way the OS does ChkDsk /F or /R for C:) but in some
ways I prefer the former. Why not support both?
Heuristics ;)

No, I was thinking of.. gah, let's find it...

http://en.wikipedia.org/wiki/Hysteresis

For example, let's say you define files < 50k to be put in one area of
disk, and > 50k to be put in another area of disk.

In that case, you'd expect thrashing if files moved across the 50k
line in either direction.

So you might deliberately make it "laggy", i.e. if < 10k it goes in
one place, and if > 100k, it goes in another place, and if it's
anywhere between, you just leave it as it is.

And/or, you can apply logic, such as "files that grow larger than the
limit are unlikely to shrink and stay below the limit". In which
case, files that grow over 100k get moved, but once moved, they're
tagged as being not for relocation even if they do shrink below 10k.
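
A toy sketch of that lag, with the two thresholds and the "once moved,
stays moved" tag (the numbers and names are just for illustration):

enum class Zone { SmallFiles, LargeFiles };

struct FilePlacement {
    Zone zone = Zone::SmallFiles;
    bool pinnedLarge = false;        // grew past the limit once; never demoted
};

// Hysteresis: only react well outside the nominal 50k boundary, so files
// hovering around the line don't get shuffled back and forth.
void reconsider(FilePlacement& p, long sizeBytes) {
    if (p.pinnedLarge) return;                  // tagged as not for relocation
    if (sizeBytes > 100 * 1024) {               // well above the line: move and pin
        p.zone = Zone::LargeFiles;
        p.pinnedLarge = true;
    } else if (sizeBytes < 10 * 1024) {         // well below the line: small zone
        p.zone = Zone::SmallFiles;
    }
    // anywhere in between: leave it where it is
}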

This is basically an expert system, i.e. you are trying to teach the
system to think. But *we* already think, so why not take advantage of
that? A user who knows a certain bunch of files will never change in
size and will seldom be accessed, can simply whisk these out of C: and
put them on a distant logical volume, out of the way.

It's the same sort of efficiency as pre-compiling 3D scenes on a
powerful workstation so that consumer PCs can "render" them without
having to rebuild them from first principles.
Hey, get two disks for the price of one ;)

Yep - allowed a mix of single-sided and double-sided drives to be used
in similar ways, but it lost the efficiency of dual heads. We was po,
in those days; each CPU clock tick was palpable...
Which is why I absolutely love not having drive letters under linux. =)

I could go as far as creating a set of directories, each mounted to a
different partition, to categorize my data and impose limits on how much
of said data I want to be able to store. That way I can always guarantee
there will be X amount of space in a certain directory if I have a need
for that.

Try that under windows...you'd end up with a sea of meaningless
drive letters.

The NTFS approach would prolly be to use different volumes and then
hide this behind junctions that route automatically, as if the "drive
letter" was mounted at a point in the directory tree.

There's also a quota system of sorts, but I don't think it's flexible
in terms of what can be quota'd.

There's a fundamental "target audience" thing here. Windows could be
like Linux if it was developed purely for developers and technerati,
because we'd love to roll up our sleeves and create custom
CLSID-mediated namespace items with our choice of particular
behaviors. Then the best of these could be rolled back into consumer
space as the "canned" items we see today. Hmm.

In a way, this is how my use of partitions evolved towards the
4-volume model I currently use. Recurring themes kept coming up:
- stuff that needed fast access (C:)
- stuff that was important, but rarely used (F:)
- stuff that was small, crucial, and often used (D:)
- other stuff, and masses of it (E:)

In the above, "crucial" means "don't stick it on C: if you really want
to see it again", as C: will always be a charnel house of temp writes,
paging, and other crash-o-matics banging away. Get outta there.
Actually no, not really it doesn't. The responsibility that my multi
threaded code works rests all on me. The compiler gives me absolutely no
aid in that regard. All existing programming languages essentially lack
the ability to properly define a multi-threaded application.

Still? Wow, that's a bit retro... then again, sware always lags
behind opportunities offered by hardware. I remember reading about
hypervisors pre-emptively time-slicing apps in their protected spaces
in the closing chapters of a book on the 386... was a long time coming,
at least in Microsoft-land (I think UNIX took the lead there).
The compiler isn't even aware that my app is multi threaded. To it,
each thread is just another function.

Does it not treat each function as potentially running in parallel
with everything else? It should do... old-fashioned phrases like
"re-entrant" come to mind at this point.
The only thing that makes it multi threaded are the calls I make
to the OS to tell it which functions of my code I'd like
spawned on their own threads.

OK.

Moving from explicit multi-processor to implicit multi-processing
(multiple cores, out-of-order execution etc.) changes things to the
point that modern development tools should expect random multi-core
parallelism as a given.

For example, the old NT licensing approach was; 1 core for "cheap"
licenses, multi-core for "expensive" licenses. To move from one to
the other (and to harness all cores), you'd have to do the NT
equivalent of recompiling the kernel; change the HAL.

XP Home is 1-processor, Pro is multi, from a licensing perspective.
But AFAIK, all multi-core processors are to be seen as single CPUs,
and between the PIII and P4 era, the trend switched away from
multi-processor motherboards to multi-core processors.
Microsoft's .Net Framework does give *some* aid in multi-threading during
runtime, such as modifying properties of some windows controls will throw
an exception if it's not thread-safe. But these are all runtime checks.
The compiler still merrily compiles it all without giving an error.

Hmm. By "compiler", I really mean the whole dev environment, from
static and runtime libraries to the way base classes are defined. The
actual compiler is more where processor compatibility battles are
fought, i.e. what branch weighting and cache size to optimize for, do
we use extra opcodes, are we OK for out-of-order execution, etc.
It's up to the operating system to load balance the CPU. The CPU itself
can do very little in that regard. If it gets a piece of code to execute,
it can't say "I am going to execute half on one core, half on the other",
as the second half may depend on the results of the first half and so
cannot be executed in parallel.

That's what I mean - this sort of on-the-fly "leave it to us"
how-it's-done logic has been moving down the line - from assembly
programmer to compiler to CPU microcode - for a while now, starting
with the original Pentium's super-scalar pipelines.

Deciding which instruction pairs to split across two pipelines
(without setbacks from cross-dependencies) is the start of the same
sort of logic that may split code between multiple cores.

In some ways, it may even be easier with true cores, as at least they
are equally capable, so you don't have to think "special case"
limitations. I grew up with a CPU that was not like this at all;
sure, there were A, Flags, B, C, H, L, IX-h, IX-l, IY-h, IY-l, SP, I
and R registers, but they were all different in terms of what they
could do. Some weren't even a full 8 bits ;-)

In the case of the Z80, you were expected to use certain registers in
certain ways, i.e. IY was supposed to be used as an address index base
register (and had opcodes optimized for this, being "expensive" to use
otherwise) etc. but if you need a bunch of separate 8-bit registers
now, you could use undocumented instructions to use the halves of
these registers as such. Useful when you want to preserve all of RAM.

Modern CPUs, as I understand it, aim to be able to do anything with
any register with equal economy - in fact, you aren't supposed to be
concerned with what actual register is used for what. First, the
compiler took that load off your shoulders if you switched from
Assembler to C, then the processor took it over.
There are things each core does on its own to attempt to improve
performance, but load-balancing across the cores is an operating system
task. And the best the OS can do is spread active threads across the cores.

OK. Wasn't sure what point in the evolution we'd reached.

So, what's likely to be better for a budget ("I just want to do email,
write letters, look at photos") PC; a single-core 2GHz CPU, or a
dual-core 1.6GHz CPU, with Vista-32 Basic as the target OS?

That ain't a rhetorical question; next week I build such things.
Is any of that seriously an issue still though with today's processing
power on just a single core? I don't think a single letter felt "sticky"
writing this post.

It used to be a differentiator around the Win95 days, when MS Office
was said to leverage undocumented OS calls for unfair advantage.
Other word processors (Word Pervert, Lotus SmartSuite, etc.) often
felt "laggy" when typing.

If you use Word right now, and open a non-trivial document, you will
see the status bar's page and line count will be updating itself as
you work in the document, for quite a few minutes sometimes. I take
that as a sign that the pagination is handed off to a background
thread. For another example, you can work in Eudora while it is
simultaneously pulling and sending mail from different email accounts
(which can give integrated av the blue heebies, heh). More threads?

Unlike some Vista betas, when you pull up a new folder and Vista's
Explorer does the silly "looking at stuff" thing, with the green
progress slug crawling behind the bread-crumbs, you can at least work
within the window. Same as when XP pulls up content from a
freshly-detected stick; you can click "Explore" from the suggestion
box without waiting for the grope to complete, at which point the
grope is cancelled. Different threads? I'd hope so.
Still doesn't really need multiple cores per se though to do it unless
maybe it has a fresh 1,000 page book to chew through I suppose.

I think if I were starting a project that I expected to take 3 years
to complete, I'd assume multi-core availability and go multi-thread,
especially if I had integrated stuff leeching off my code. I'd try
and toss those leeches onto another core, and make no assumptions
about what they haven't done to whatever they hook into.

Times like this, I'm glad I'm off that "programmer" treadmill ;-)
I used to have a Cyrix once!!

The thing that put me off the whole "trust me, it's faster even though
it's slower" is that it was always such a disclaimer'd YMMV thing.

The K5 was a low point, though. Not only was it as slow as the Pentium
core it was supposed to outperform, but the complexity of the core
meant it couldn't be clocked up.

Worse, it was the only CPU where I've experienced different runtime
behaviour (e.g. "fonts look funny", some graphics appear at the wrong
size or not at all, etc.) purely because a different CPU was in the
PC. This is aside from expected "crashes because timing assumptions
were invalidated" thing that happens far more often.

In any case, AMD's own DX4-133 ate it alive, running from a cheaper
486DXn motherboard. So AMD cancelled the DX4-150 (it would have been
embarrassing; its "ghost" remains in motherboard manuals and jumper
settings, tho), and tossed out the K5 team in favor of the NexGen team
they bought, who would in turn develop the K6.

Perhaps Intel's undergoing a similar team swap, what with the mobile
team taking over desktop projects from the NetBurst folks?
Windows' internal messaging stuff is a single-threaded message
pump. Nothing more than a simple FIFO. Even in a multi threaded
application, the message pump is still a single threaded FIFO.

Hmm, I see the problem. This is what would kill OS/2's claims to
"uncrashability"; geek apologists would say "it hasn't crashed, it's
still running, it's just that you can't control it because the
keyboard/mouse input queue's fallen over".

Hmm, why were we so NOT reassured ;-)
This becomes very evident when writing .Net based applications as many
controls will whine and moan if they are modified from a thread other than
the thread they were created on. If windows would distribute events across
multiple threads those controls would never shut up! =)

I guess the take-home here is "we aren't ready". So would you go so
far as to consider extra cores a waste of time, for general use?

When do you expect to ship?
My multi threading problem actually is not really a race condition.

OK. Tho those issues find you, as a rule :-/
Currently my engine is setup as follows:

- One buffer for vertex data
- One buffer for color data

These two buffers are passed to OpenGL as vertex and color arrays
respectively as what I primarily display are just colored polygons. No
textures needed. The buffers each have a fixed size.

The rendering loop then goes to each object and queries it for its vertex
and color data, and this data is then added to the respective buffers.
When the buffers are full, they are submitted to the video card for
rendering and the engine goes to chew on the next set of objects while the
video card is now busy processing the data I sent to it.

So I do have some level of multi threading here in terms of offloading
processing to the GPU.

Yup. Do you fit with the option to render in synch with frame paint?
Multi threaded rendering though would require a much different approach.
Especially since OpenGL does not like multi threading much. All OpenGL
calls have to be made from the same thread the OpenGL context was created
on.

OK. I know OpenGL's being/been deprecated in Windows as DirectX
takes off, but that won't matter if this is for Linux, and/or already
targeting hi-end PCs optimized for OpenGL (tho that may be a sinking
ship in the medium-long term, i.e. if shipping years from now).
That means I'd need multiple geometry data buffers (one set of buffers per
thread) and each thread would need the ability to submit its geometry data
to the main application thread for submission to the GPU. At the same time
though, the loop in the main application thread has to be in such a way
that it doesn't consume CPU resources. It has to be able to go to sleep
while it waits on the other threads and it also has to instantly wake up
when another thread needs it to do something.

It's all doable and it's all a big pain in the butt to implement!!

:)

There's the matter of how to distribute the load across n threads (or
"whatevers", if you abstract the problem a bit further). PICK used to
use id-hashing to distribute records across a number of pre-allocated
"data buckets", and MS may do something similar when splitting web
cache items across random-named directories.

PICK uses prime number modulos as they clump hashed items less than
the multiples of 2 that MS uses for IE's cache branches. Then again,
in the NTFS age, MS may not be hashing across random-named directories
for speed, but to break predictable paths as a malware protection.
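
For illustration, the difference comes down to the bucket count used in the
modulo; a small sketch (bucket counts invented):

#include <cstddef>
#include <functional>
#include <string>

// Distribute record ids across a fixed number of "buckets" (directories).
std::size_t bucketFor(const std::string& id, std::size_t bucketCount) {
    return std::hash<std::string>{}(id) % bucketCount;
}

// PICK-style: a prime bucket count (e.g. 31) lets every bit of the hash
// influence the result, whereas a power of two (e.g. 32) keeps only the low
// bits, so patterned ids whose hashes share factors of 2 clump into fewer
// buckets.
// Usage: bucketFor("RECORD-1024", 31) vs bucketFor("RECORD-1024", 32)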
I'd much rather try to offload more processing to the already yawning and
very bored GPU instead. But it's hard to offload memcpy, where over 90% of
my load is, to the GPU! :(

Is memcpy related to strcpy, as in "in null terminators we trust"?
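
(For what it's worth: cousins, but memcpy copies an explicit byte count
rather than trusting a terminator; a quick sketch of the distinction:)

#include <cstdio>
#include <cstring>

int main() {
    const char src[] = "hello";
    char a[16];
    char b[16];

    // strcpy copies until it hits the trailing '\0' - "in null terminators we trust".
    std::strcpy(a, src);

    // memcpy copies exactly the byte count you give it, terminator or not.
    std::memcpy(b, src, sizeof src);    // sizeof src includes the '\0'

    std::printf("%s %s\n", a, b);
}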

Watch that; if you find Core 2 Duo is running things slower than the
older CPU architecture and you're due to ship to an all-Duo
marketspace, you could be performance-under-competitive :)
Actually it is the exact opposite way around!

Physics and AI are on such a tight leash it isn't even funny. Especially
true for multi-player games. But even in single player games, they
generally run on a very exact and tight schedule. 30 Frames per second is
a popular number I think for AI and Physics.

OK... that's interesting, and as you say, unexpected...
Graphics and sound on the other hand can be, and usually are,
asynchronous.

Or rather, locked into some sort of real-time synch?
Rendering and sound decoding / output is all handled by
hardware anyway. All the software has to do is keep feeding the
hardware with data before it runs out of stuff to do, especially in regard
to sound. That doesn't require any overly precise timing.

OK. These days, processing is fast enough to "think" between wave
crests of sound waves, so it makes sense... in ye olde days, we'd have
to prio up for screen refresh, nowadays you can prolly cram a lot of
"thinking" between such realtime demands.
AI and Physics on the other hand, if they don't run at a fixed framerate
then all sorts of weird and bad things happen. Especially in multiplayer
games, they absolutely have to, under all conditions, run at a fixed rate
so that all results are identical on all players' systems.

Hmm... "results are identical on all players systems" could be a
saleability limitation; surely it would be good to say "the game plays
smoothly on slow PCs, but gives a tougher game on faster PCs that
allow us to run deeper AI"?
Supreme Commander, just as one example, runs a complete simulation of the
entire battlefield on *every* player's computer and compares them to one
another. If any one player's data does not match all the other players' data
in any way, it's considered a desync and the game is aborted.

Ah, that's the multiplayer thing. The slow links between these
systems would also shape the way things have to be done.

Cool! Though the tradeoff may be to limit compatibility to systems
with "enough" graphic memory, maybe? Or does AGP and successors
successfully blur such limits, the way paging hides RAM limits?
I don't think there is a single soundcard these days that can't decode MP3
in hardware. =) CPU has absolutely nothing to do there hehe.

Is it? Well, that's good news. What about integrated sound? It's
very rare that I see a stand-alone sound card these days, even in PCs
built for gaming. It's only really the sound studio stuff that needs
"special" sound for ASIO performance, independent multiple inputs and
outputs, and better S/N from getting the sound guts out of the case.
Only if you're running Vista!!

Or having to "support" such titles. Interesting thought; as Linux
suggests itself for embedded/hidden use within dedicated "black box"
devices, is it tempted to pitch as controller OS for set-top players?


-------------------- ----- ---- --- -- - - - -
Tip Of The Day:
To disable the 'Tip of the Day' feature...
 

Stephan Rose

I like these trans-OS discussions because they let one explore
possible ways of doing things without being chained down by the way
particular OSs actually do things :)

Agreed =)
There'd have to be head clicks after every buffer full of source
material has filled up, so this implies Linux is setting aside a
larger buffer (limited by a bit less than physical RAM) than Windows
does. The downside is less RAM left for anything running at the same
time, and maybe there's more underfootware in Windows.

Well Linux handles this in a very nice way. It uses almost up to 95% of
all free memory for the file system cache. However, unlike Windows, memory
it uses for the file system cache is NOT taken away from applications. So
for instance, my system might look like this:

2 gigs total RAM

300 megs in use
1.7 gigs free
1.6 gigs cached.

the "1.6 gigs" in use by the cache don't even show up under
the normal memory statistics. So the memory, even though it is used by the
system cache, is still available to applications as free memory. The cache
contents are then just simply discarded when an application needs the
memory for itself. Works really well in practice.
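
A tiny sketch of where those numbers come from (reading /proc/meminfo; the
field names are the standard kernel ones, and the "effectively free"
arithmetic is just the point being made above):

#include <fstream>
#include <iostream>
#include <map>
#include <string>

int main() {
    // /proc/meminfo reports "Name:   value kB", one pair per line.
    std::ifstream meminfo("/proc/meminfo");
    std::map<std::string, long> kb;
    std::string name, rest;
    long value = 0;
    while (meminfo >> name >> value) {
        std::getline(meminfo, rest);            // discard the trailing " kB"
        name.pop_back();                        // strip the ':'
        kb[name] = value;
    }
    // Page-cache memory is reclaimed on demand, so it is still effectively
    // available to applications.
    std::cout << "Total:            " << kb["MemTotal"] << " kB\n"
              << "Free:             " << kb["MemFree"]  << " kB\n"
              << "Cached:           " << kb["Cached"]   << " kB\n"
              << "Effectively free: " << kb["MemFree"] + kb["Cached"] << " kB\n";
}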
Also, there will be a minimum unpageable footprint for the OS.

It's funny how XP's OS footprint alone is roughly the size of my entire RAM
usage with all the applications I use loaded. =)
The only way to improve that best-case scenario would be to trade off
processor usage by compressing source material and expanding it when
writing to destination. That's a bad idea where the source material
is already lossily compressed, as it will be with most things large
enough to matter (video, picture collections, archives, MP3s).

Yea and the cpu usage would hurt on cpu intensive operations. Now if the
compression / decompression was done in hardware...there might be some
gains. But I think it'd probably be more trouble than it's worth.
Sounds as if it may be as shallow as NTFS, then. "Hey, our file system
survived; your data files are your problem"; that mindset increasingly
pervades MS's approach to file system maintenance.

Well the problem with that is that a file system that can withstand any
extended amount of corruption is just not feasible yet. Not even with
todays processors. Not if you actually want to be able to load and save
files in under 5 minutes.

Mirroring the data on the file system is unfeasible. It'd take way too
much space. So a better space saving approach is parity. However, parity
calculations are expensive and take time. That is why software based raid
solutions don't do it.

If you want, for example, a striped raid setup with parity that could
sustain multiple disk failure, you need a hardware controller to do it as
it'd be impossible to get any decent amount of performance if it had to be
done in software.

The best way in my opinion to ensure security is to have multiple copies
of the data on multiple computers in physically different locations. And
by that I don't mean different spots of the same room. =)

All my important data is present both on two of my computers as well as a
server located in a remote location. All this is managed via source
control so it happens transparently in the background and doesn't cost me
any time.
Your OEM may have, if you're still using their setup :)

OEM!?!?? Come on now!!! You don't think I'd ever use an OEM PC now do you?
How could you!!! ;)

All my PCs are self built. The only OEM I use is myself. =)
The usual conversion killers are:
- no way to convert back from NTFS to FATxx
- if process is interrupted or fails, expect data loss
- appropriate permissions etc. are not assigned to files etc.
- 512-byte clusters because volume wasn't boundary-aligned
- loss of product activation "life" as volume serial number changed

I knew there was at least one reason why I don't convert file systems. =)

That is actually one thing Linux does right...yet again...over MS.

Converting an Ext2 volume to Ext3 is as easy as mounting it as Ext3. No
conversion actually has to be done...Just an Ext3 volume can't be mounted
as Ext2 if Ext3 journal data is still present. It is however fully
backwards compatible to Ext2 if you first make sure all journal entries
have been processed and the journal is empty.

Same goes for Ext4...you can mount an Ext3 volume as Ext4 with no issues
or conversion necessary. Though going back to Ext3 is a little more
difficult there, as Ext4 introduces some new things into the file
system called extents that further reduce, if not totally eliminate, any
fragmentation.
OK. That's better than XP's defragger, which uses the same green for
all material that cannot be moved, forcing you to guess.


Cool! Hey, MS... did you know SVGA now supports > 16 colors?

ROFL! Good one =)
I'd like it as a non-integrated tool, and without the underfootware
component (e.g. it could use XP's native use-tracking)


OK, or perhaps a separate "Tools" thing from the defragger's UI, much
as HiJackThis hangs ADSSpy as an "extra tool".

I do, for example, expect a defragger to purge erased directory
entries, and where directories are indexed as they are in NTFS, that
would involve rebuilding those structures (at least, as I understand
it). Actually, I'd expect MFT to be cleaned up as well, especially if
the defragger boasts that it "defrags the MFT".

Defrag IS a high-risk operation that exists to make a healthy file
system more fit, and that point is not emphasised enough - i.e. it is
NOT a troubleshooting tool, any more than marathon gym training is a
diagnostic test for heart disease.

Even if a defragger "only" defrags files, the risk of disaster (crash,
corruption of what is read from disk and written back due to bad RAM,
etc.) is very much present. Doing that while avoiding the purging of
redundant structure is like jumping out of a plane, but wearing safety
glasses to protect your eyes on landing.

Then again, this fits the "file system's OK, who cares about your
data" vendor-vision thing I mentioned earlier.

True enough, I can see what you are saying. Then again though, how much do
you really gain from purging currently unused MFT entries? A megabyte?
Maybe?

I suppose if you took a 300 GB volume and resized it to 50 GB then purging
the MFT would be really beneficial but at that point in time, I'd kind of
expect the resize tool to do it.
Yup. MS haven't bothered to do much to maintain or recover NTFS, and
because they keep it a proprietary, moving target, no-one else can do
it either. Doing so is high-risk; the file system could change in the
blink of an SP or patch, before you've recouped your investment.

Yup that is the problem with NTFS support from Linux. I can read NTFS
volumes perfectly fine but writing can be dangerous so NTFS volumes are
generally mounted read-only. There is supposed to be an NTFS Linux driver
now with 100% safe write support, but I have never tried it.
Hm. It's one thing to consider (as MS seems to do) the user as alone,
another to consider the user's system as including techs who can fix
things. For example, you might consider disk compression to be
low-risk (as MS did) according to the first model (where if it blinks,
the user's dead anyway) but not the second.

MS talks to us techs about "partnership", but it's largely hot air
when it comes to delivering better value to end users. It only goes
as far as "partnering" to sell more stuff.
Agreed.


This changes in developer and professional network administration,
where genuine technical partnership is common and effective. It's
just the consumers who get left out in the cold, because the nadir of
large "royalty OEMs" is taken as the acceptable baseline.

It's pathetic.

Agreed again =)
This is true, I guess... same thinking as defragging, really; "duty
now for the future". It's just that once you create all those empty
file structure items, you're obliged to maintain them, if you intend
to use them without sanity-checking them first.

If you do sanity-check them first, then that's prolly the same sort of
overhead as creating them (tho noting reads are "cheaper" than writes,
which applies to HDs and flash for different reasons)

Not really more overhead than NTFS sanity checking preallocated or old
nodes that are no longer in use from deleted files. =)
I wouldn't resize MFT on the fly, as Vista tries to do with defrag;
that jars with the concept of "the OS should initiate no risky activities
unless the user asked for them". In that sense, I'd defrag
and purge-out MFT when defragging, and I'd defrag only when this is
explicitly initiated by the user.

Being old-school, I'd prolly do it outside of multitasking "business
as usual", either on a lock-down, defrag, shutdown basis or on a
restart, defrag, initiate multitasking basis. The latter's more
common (e.g. the way the OS does ChkDsk /F or /R for C:) but in some
ways I prefer the former. Why not support both?


No, I was thinking of.. gah, let's find it...

http://en.wikipedia.org/wiki/Hysteresis

For example, let's say you define files < 50k to be put in one area of
disk, and > 50k to be put in another area of disk.

In that case, you'd expect thrashing if files moved across the 50k
line in either direction.

So you might deliberately make it "laggy", i.e. if < 10k it goes in
one place, and if > 100k, it goes in another place, and if it's
anywhere between, you just leave it as it is.

And/or, you can apply logic, such as "files that grow larger than the
limit are unlikely to shrink and stay below the limit". In which
case, files that grow over 100k get moved, but once moved, they're
tagged as being not for relocation even if they do shrink below 10k.

This is basically an expert system, i.e. you are trying to teach the
system to think. But *we* already think, so why not take advantage of
that? A user who knows a certain bunch of files will never change in
size and will seldom be accessed, can simply whisk these out of C: and
put them on a distant logical volume, out of the way.

It's the same sort of efficiency as pre-compiling 3D scenes on a
powerful workstation so that consumer PCs can "render" them without
having to rebuild them from first principles.

I see what you are saying but what does grouping files by size really
accomplish? I personally use a mix of large and small files so I'm not sure
what the advantage would be of grouping them in different areas of the
disk based on their size. Really though I think what needs to happen more
than anything else is that we get away from spinning disks and
mechanically moving heads!! Flash based systems don't suffer *any* of
those concerns and couldn't care less where a file is located.

Now if the stupid things were easier to produce in 300gb sizes and
wouldn't die so quick...
Yep - allowed a mix of single-sided and double-sided drives to be used
in similar ways, but it lost the efficiency of dual heads. We was po,
in those days; each CPU clock tick was palpable...


The NTFS approach would prolly be to use different volumes and then
hide this behind junctions that route automatically, as if the "drive
letter" was mounted at a point in the directory tree.

There's also a quota system of sorts, but I don't think it's flexible
in terms of what can be quota'd.

There's a fundamental "target audience" thing here. Windows could be
like Linux if it was developed purely for developers and technerati,
because we'd love to roll up our sleeves and create custom
CLSID-mediated namespace items with our choice of particular
behaviors. Then the best of these could be rolled back into consumer
space as the "canned" items we see today. Hmm.

In a way, this is how my use of partitions evolved towards the
4-volume model I currently use. Recurring themes kept coming up:
- stuff that needed fast access (C:)
- stuff that was important, but rarely used (F:)
- stuff that was small, crucial, and often used (D:)
- other stuff, and masses of it (E:)

In the above, "crucial" means "don't stick it on C: if you really want
to see it again", as C: will always be a charnel house of temp writes,
paging, and other crash-o-matics banging away. Get outta there.

Similar to how my setup is, actually. Just without the meaningless drive
letters. ;)
Still? Wow, that's a bit retro... then again, sware always lags
behind opportunities offered by hardware. I remember reading about
hypervisors pre-emptively time-slicing apps in their protected spaces
in the closing chapters of a book on the 386... was a long time coming,
at least in Microsoft-land (I think UNIX took the lead there).


Does it not treat each function as potentially running in parallel
with everything else? It should do... old-fashioned phrases like
"re-entrant" come to mind at this point.

Not entirely. Compilers these days are smart enough that they will try to
combine functions into one if it makes sense, to avoid unnecessary function
calls. So smaller functions are often inlined into larger functions.

Of course if a function is recursive, the compiler sees that and does not
try to inline it. So re-entrant functions aren't a problem. =)

However ultimately, if I want to write a function that is multi-thread
safe, it is still my responsibility to make sure it actually is.
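
For example, the compiler happily accepts either version of this; making it
actually thread-safe is the author's job, say with an explicit lock (a sketch):

#include <mutex>

// Nothing in the language marks this data as shared; unsynchronized access
// from any number of threads still compiles cleanly.
static long counter = 0;
static std::mutex counterMutex;

// Thread safety is purely the author's doing: remove the lock and the code
// still compiles, it just breaks at runtime under contention.
void incrementCounter() {
    std::lock_guard<std::mutex> lock(counterMutex);
    ++counter;
}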
OK.

Moving from explicit multi-processor to implicit multi-processing
(multiple cores, out-of-order execution etc.) changes things to the
point that modern development tools should expect random multi-core
parallelism as a given.

For example, the old NT licensing approach was; 1 core for "cheap"
licenses, multi-core for "expensive" licenses. To move from one to
the other (and to harness all cores), you'd have to do the NT
equivalent of recompiling the kernel; change the HAL.

XP Home is 1-processor, Pro is multi, from a licensing perspective.
But AFAIK, all multi-core processors are to be seen as single CPUs,
and between the PIII and P4 era, the trend switched away from
multi-processor motherboards to multi-core processors.


Hmm. By "compiler", I really mean the whole dev environment, from
static and runtime libraries to the way base classes are defined. The
actual compiler is more where processor compatibility battles are
fought, i.e. what branch weighting and cache size to optimize for, do
we use extra opcodes, are we OK for out-of-order execution, etc.


That's what I mean - this sort of on-the-fly "leave it to us"
how-it's-done logic has been moving down the line - from assembly
programmer to compiler to CPU microcode - for a while now, starting
with the original Pentium's super-scalar pipelines.

Deciding which instruction pairs to split across two pipelines
(without setbacks from cross-dependencies) is the start of the same
sort of logic that may split code between multiple cores.

Not sure how much of this, if any, the x86 processor does. But the arm
processors I work with actually have some parallel code execution ability
built in. In some cases where two subsequent instructions don't access the
same registers they can execute in parallel. It makes a huuuuge speed
difference too. I've really managed to optimize some functions that way.
In some ways, it may even be easier with true cores, as at least they
are equally capable, so you don't have to think "special case"
limitations. I grew up with a CPU that was not like this at all;
sure, there were A, Flags, B, C, H, L, IX-h, IX-l, IY-h, IY-l, SP, I
and R registers, but they were all different in terms of what they
could do. Some weren't even a full 8 bits ;-)

In the case of the Z80, you were expected to use certain registers in
certain ways, i.e. IY was supposed to be used as an address index base
register (and had opcodes optimized for this, being "expensive" to use
otherwise) etc. but if you need a bunch of separate 8-bit registers
now, you could use undocumented instructions to use the halves of
these registers as such. Useful when you want to preserve all of RAM.

Modern CPUs, as I understand it, aim to be able to do anything with
any register with equal economy - in fact, you aren't supposed to be
concerned with what actual register is used for what. First, the
compiler took that load off your shoulders if you switched from
Assembler to C, then the processor took it over.

For the most part this holds true for the x86 processor. Though you still
have special purpose registers for floating point, SSE, MMX, and so on.

Arm processors still use specialized registers in their regular set of
registers. r0-r3 are usually scratch registers meaning they can be used in
a function without backing up their state and r0 is usually the return
register if a function returns a value. r4-r11 are registers for function
use that have to maintain state between function calls, so if a function
needs to use it, it needs to preserve its state on the stack first. Then
for r12-r14 my memory is a bit hazy. I know one is the link register
containing the return address, another is the stack pointer. And then r15
is the program counter.

So specialized registers are still alive and well. =) Though not so much
in the PC world hehe.
OK. Wasn't sure what point in the evolution we'd reached.

So, what's likely to be better for a budget ("I just want to do email,
write letters, look at photos") PC; a single-core 2GHz CPU, or a
dual-core 1.6GHz CPU, with Vista-32 Basic as the target OS?

That ain't a rhetorical question; next week I build such things.

I'd probably go for a dual core due to all the crap you need to run on a
Vista box in the background in an attempt to keep the malware out of it.
Plus the malware once it does finally get in would like a core too. =)

Plus what is the price difference between the two processors? Can't be that
much more for the dual core is it?
It used to be a differentiator around the Win95 days, when MS Office
was said to leverage undocumented OS calls for unfair advantage.
Other word processors (Word Pervert, Lotus SmartSuite, etc.) often
felt "laggy" when typing.

If you use Word right now, and open a non-trivial document, you will
see the status bar's page and line count updating itself as
you work in the document, for quite a few minutes sometimes. I take
that as a sign that the pagination is handed off to a background
thread. For another example, you can work in Eudora while it is
simultaneously pulling and sending mail from different email accounts
(which can give integrated av the blue heebies, heh). More threads?

Unlike some Vista betas, when you pull up a new folder and Vista's
Explorer does the silly "looking at stuff" thing, with the green
progress slug crawling behind the bread-crumbs, you can at least work
within the window. Same as when XP pulls up content from a
freshly-detected stick; you can click "Explore" from the suggestion
box without waiting for the grope to complete, at which point the
grope is cancelled. Different threads? I'd hope so.

Yea all such stuff is run on different threads. Trying to run stuff like
that on the main thread would be just horrid.
I think if I were starting a project that I expected to take 3 years
to complete, I'd assume multi-core availability and go multi-thread,
especially if I had integrated stuff leeching off my code. I'd try
and toss those leeches onto another core, and make no assumptions
about what they haven't done to whatever they hook into.

Times like this, I'm glad I'm off that "programmer" treadmill ;-)

Times like this, I sometimes would be glad if I was too haha. =)
The thing that put me off the whole "trust me, it's faster even though
it's slower" is that it was always such a disclaimer'd YMMV thing.

The K5 was a low point, though. Not only was it as slow as the Pentium
core it was supposed to outperform, but the complexity of the core
meant it couldn't be clocked up.

Worse, it was the only CPU where I've experienced different runtime
behaviour (e.g. "fonts look funny", some graphics appear at the wrong
size or not at all, etc.) purely because a different CPU was in the
PC. This is aside from expected "crashes because timing assumptions
were invalidated" thing that happens far more often.

In any case, AMD's own DX4-133 ate it alive, running from a cheaper
486DXn motherboard. So AMD cancelled the DX4-150 (it would have been
embarrassing; its "ghost" remains in motherboard manuals and jumper
settings, tho), and tossed out the K5 team in favor of the NexGen team
they bought who would in turn develop the K6.

Perhaps Intel's undergoing a similar team swap, what with the mobile
team taking over desktop projects from the NetBurst folks?

Possible, all I gotta say is, whatever they did on the Core 2 Duo...it
definitely worked!
Hmm, I see the problem. This is what would kill OS/2's claims to
"uncrashability"; geek appologists would say "it hasn't crashed, it's
still running, it's just that you can't control it because the
keyboard/mouse input queue's fallen over.".

Hmm, why were we so NOT reassured ;-)

Hahaha =)
I guess the take-home here is "we aren't ready". So would you go so
far as to consider extra cores a waste of time, for general use?

On a Linux system? Yes.
On a Vista system? May I have more cores please?

From an application standpoint, more than 1 core is a waste for most
applications. Nobody that uses their computer for standard e-mail,
web, basic office products is going to see much benefit from multiple
cores.

From an OS standpoint though, Vista is such a hog that the more
cores the better due to its love for dozens of background processes.
When do you expect to ship?

I am aiming for later on this year. It's hard to set a date right now as
this is a project I work on in my "spare" time. So how much time a week I
can devote to it depends largely on my other workloads.
OK. Those issues find you, as a rule :-/


Yup. Do you fit with the option to render in synch with frame paint?

You mean vertical sync? No I don't worry about vertical sync much. I
actually have very few screen updates. My rendering is actually tied to
the OnPaint event which only gets triggered when data actually changes or
the OS feels the window needs to be redrawn (ie if some window that
previously obscured part of it no longer does).
OK. I know OpenGL's being/been deprecated in Windows as DirectX
takes off, but that won't matter if this is for Linux, and/or already
targeting hi-end PCs optimized for OpenGL (tho that may be a sinking
ship in the medium-long term, i.e. if shipping years from now).

OpenGL being deprecated in Windows?? Got any reference to that? I mean MS
has already shot themselves in the foot with Vista. Do they really wanna
blow off their arms and legs now too?

I mean I realize the stock OpenGL drivers in Vista suck. But that really
is to be expected as really the video card vendor has to supply
appropriate OpenGL drivers to work with the hardware which all vendors do.

Most high-end 3D CAD/CAM software is all OpenGL. Actually they ALL use
OpenGL. I can only think of a single one that has added support for
DirectX, which is Autodesk Inventor. Not surprising seeing how far up MS'
butt Autodesk's head appears to be. Each release of Inventor adheres
strictly to MS standards: Be slower and more bloated each release and try
to be ridiculous about it.

Made me switch to Solidworks.

But all other vendors, Solidworks, Pro/Engineer, Unigraphics, etc. all use
OpenGL and many are cross-platform compatible with Linux. Solidworks
unfortunately isn't though.

So I am not so sure if MS really wants to piss off the world's largest 3D
CAD/CAM developers by dropping OpenGL support? Especially when most
already have linux solutions available....

On the other hand, maybe they do. I kind of get the odd feeling that
Microsoft is increasingly favoring the mass market and is starting to say
"screw the industrial and commercial market". For me Vista (or the
upcoming 2008 server) is not an operating system that belongs in any
business nor on any server or platform that controls any industrial
process.
:)

There's the matter of how to distribute the load across n threads (or
"whatevers", if you abstract the problem a bit further. PICK used to
use id-hashing to distribute records across a number of pre-allocated
"data buckets", and MS may do something similar when splitting web
cache items across random-named directories.

PICK uses prime-number moduli as they clump hashed items less than
the multiples of 2 that MS uses for IE's cache branches. Then again,
in the NTFS age, MS may not be hashing across random-named directories
for speed, but to break predictable paths as a malware protection.
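
For illustration, hashing ids into a fixed set of buckets looks roughly
like this (a minimal C++ sketch of the general idea, not PICK's or MS's
actual code; the prime 97 and the hash are arbitrary examples):

    #include <cstddef>
    #include <string>
    #include <vector>

    // Distribute record ids across a fixed number of pre-allocated buckets.
    // A prime bucket count tends to spread patterned ids more evenly than
    // a power-of-2 count would.
    const std::size_t kBuckets = 97;   // prime, chosen arbitrarily here

    std::size_t BucketFor(const std::string& id)
    {
        std::size_t h = 0;
        for (std::size_t i = 0; i < id.size(); ++i)
            h = h * 31 + static_cast<unsigned char>(id[i]);
        return h % kBuckets;           // the modulo picks the bucket
    }

    int main()
    {
        std::vector<std::vector<std::string> > buckets(kBuckets);
        buckets[BucketFor("CACHE0001.dat")].push_back("CACHE0001.dat");
        return 0;
    }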


Is memcpy related to strcpy, as in "in null terminators we trust"?
<shudder>

No it is not related to that. =)

memcpy takes a destination pointer, source pointer and a length of how
many bytes to copy and then copies those bytes. It is my responsibility to
ensure though that both buffers are large enough.
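
In other words, something like this (a trivial sketch; the buffer sizes
are made up):

    #include <cstring>   // memcpy

    int main()
    {
        char src[64] = "some data";
        char dst[64];                       // caller must make sure this is big enough
        std::memcpy(dst, src, sizeof src);  // copies exactly 64 bytes; no terminator logic
        return 0;
    }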
Watch that; if you find Core 2 Duo is running things slower than the
older CPU architecture and you're due to ship to an all-Duo
marketspace, you could be performance-under-competitive :)

Not really too worried about it. If I really do need to go multi threaded,
I can do it in about a week or so...
OK... that's interesting, and as you say, unexpected...


Or rather, locked into some sort of real-time synch?

Naw, there's no need to run the Graphics synced to anything. They just run
when Physics and AI don't and squeeze in as many frames as possible in
that time. Some games have the option to choose between syncing to
vertical blank or not but...unless you are on a CRT, there really is no
need to sync to vertical blank.
OK. These days, processing is fast enough to "think" between wave
crests of sound waves, so it makes sense... in ye olde days, we'd have
to prio up for screen refresh, nowdays you can prolly cram a lot of
"thinking" between such realtime demands.

You actually don't need to worry about screen refresh or sound at all. As
far as sound goes, you just simply set up a memory buffer, put your sound
data in it...and tell the API to go play. Done.

And as far as graphics go, same deal. You submit your data to be rendered
to the video card and it renders it. Now until you submit new data...this
is the data that will be shown. So you don't need to manually refresh
every screen update. The video card already does this. So it doesn't
matter if you miss frames.
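
Neither post names an API above, but as one example the "fill a buffer
and tell it to play" pattern looks roughly like this with OpenAL (a
hedged sketch only; the one-second silent buffer is just a placeholder):

    #include <AL/al.h>
    #include <AL/alc.h>
    #include <vector>

    int main()
    {
        ALCdevice*  dev = alcOpenDevice(NULL);        // default output device
        ALCcontext* ctx = alcCreateContext(dev, NULL);
        alcMakeContextCurrent(ctx);

        std::vector<short> samples(44100, 0);         // 1 second of 16-bit mono silence
        ALuint buf, src;
        alGenBuffers(1, &buf);
        alBufferData(buf, AL_FORMAT_MONO16, &samples[0],
                     samples.size() * sizeof(short), 44100);
        alGenSources(1, &src);
        alSourcei(src, AL_BUFFER, buf);
        alSourcePlay(src);   // hand it off; no per-frame babysitting needed
        // (a real program would keep running or wait here so playback is heard)
        return 0;
    }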
Hmm... "results are identical on all players systems" could be a
saleability limitation; surely it would be good to say "the game plays
smoothly on slow PCs, but gives a tougher game on faster PCs that
allow us to run deeper AI"?

Differences in PC speeds are an issue. When you play multiplayer games like
Supreme Commander you will often see games hosted with titles like this:

"2vs2 Dual Core only" =)

Because basically, the game plays for everyone at the speed of the slowest
player.
Ah, that's the multiplayer thing. The slow links between these
systems would also shape the way things have to be done.


Cool! Though the tradeoff may be to limit compatibility to systems
with "enough" graphic memory, maybe? Or does AGP and successors
successfully blur such limits, the way paging hides RAM limits?

Well I only need to draw 2D Colored Polygons. I draw them in 3D Space via
an orthogonal projection matrix as this allows me to offload quite a bit
of work to the GPU (such as client space to screen space translation).
But the actual memory usage and workload on the GPU is very low. So memory
constraints are not something I am worried about unless someone is running
a Riva TNT2. =)

I am not likely to ever even get to use half the memory of one of today's
standard video cards.

Usually the heavy load on video cards when it comes to memory are texture
maps. Texture maps eat huge amounts of memory and I have none of those.
All I have is vertex data.
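
For what it's worth, the "flat colored polygons through an orthographic
projection" setup is only a handful of legacy OpenGL calls. A minimal
GLUT sketch of the idea (not the actual application code; window size
and coordinates are made up):

    #include <GL/glut.h>

    void display()
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, 800, 600, 0, -1, 1);   // map GL units 1:1 onto an 800x600 client area
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glBegin(GL_TRIANGLES);            // one flat-colored 2D polygon, no textures
        glColor3f(0.2f, 0.6f, 1.0f);
        glVertex2f(100.0f, 100.0f);
        glVertex2f(400.0f, 120.0f);
        glVertex2f(250.0f, 450.0f);
        glEnd();
        glutSwapBuffers();
    }

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutInitWindowSize(800, 600);
        glutCreateWindow("ortho 2D sketch");
        glutDisplayFunc(display);         // drawn only when a redraw is requested
        glutMainLoop();
        return 0;
    }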
Is it? Well, that's good news. What about integrated sound? It's
very rare that I see a stand-alone sound card these days, even in PCs
built for gaming. It's only really the sound studio stuff that needs
"special" sound for ASIO performance, independent multiple inputs and
outputs, and better S/N from getting the sound guts out of the case.

Well my integrated sound, though I still prefer and use my Creative Labs
Audigy, supports all sorts of fancy stuff. I am sure hardware decoding is
on the list, though I do admit that I would need to look it up to be sure.
Or having to "support" such titles. Interesting thought; as Linux
suggests itself for embedded/hidden use within dedicated "black box"
devices, is it tempted to pitch as controller OS for set-top players?

Can you elaborate a little more on this? Not entirely sure I 100%
understand the question. =)



--
Stephan
2003 Yamaha R6

The reason there's no day that I remember you
is that there has never been a time I forgot you
 
C

cquirke (MVP Windows shell/user)

(on copying large files from one HD to another:)
Well Linux handles this in a very nice way. It uses almost up to 95% of
all free memory for the file system cache. However, unlike windows, memory
it uses for the file system cache is NOT taken away from applications.
So the memory ... is still available to applications as free memory. The cache
contents are then just simply discarded

I think you'll find Windows has done the same for a while now, perhaps
as far back as when VCache replaced DOS SmartDrv in Windows for
Workgroups 3.11; at that point, memory allocated to cache ceased to be
fixed at runtime, and could dynamically balance with "live" use.

In the Win98 era, the relationship between paging and caching was
improved, so that something that was already cached in RAM didn't have
to be formally paged back into RAM ("in-cache execution").

But there may be other guess-ahead strategies, such as pre-populating
RAM with code and sectors that are expected to be used, that may work
well generally, but may "de-optimize" this specific situation.
It's funny how XP's OS footprint alone is roughly the size of my entire RAM
usage with all the applications I use loaded. =)

What matters, is how much of this footprint cannot be paged out of
RAM, e.g. interrupt-servicing code etc.
Well the problem with that is that a file system that can withstand any
extended amount of corruption is just not feasible yet.

That's a cop-out; sure, bullet-proof isn't possible, but how well a
file system approaches that goal is IMO a large factor in how
non-sucky it is. But it's usually "fast, survivable; pick one",
something that includes RAID 0 and RAID 1 as extremes.
All my PCs are self built. The only OEM I use is myself. =)

Yep - I'm an OEM, too ;-)
Converting an Ext2 volume to Ext3 is as easy as mounting it as Ext3. No
conversion actually has to be done...

Well, that's not really conversion, is it? :)

Windows just uses FAT16 and FAT32 without any rigmarole needed to
"mount them as NTFS", though Vista needs to run *on* NTFS.
True enough, I can see what you are saying. Then again though, how much do
you really gain from purging currently unused MFT entries? A megabyte?

Could be massive.

Remember that 150G file mass that was stripped to < 20G?
I suppose if you took a 300 GB volume and resized it to 50 GB then purging
the MFT would be really beneficial but at that point in time, I'd kind of
expect the resize tool to do it.

The resize tools tend to be 3rd-party and less intimate with NTFS's
evolving feature set. I'd rather have an OS-native tool do it.
Yup that is the problem with NTFS support from Linux.

It's also the problem with NTFS support from Windows :-/
I can read NTFS volumes perfectly fine but writing can be dangerous

Yup, Linux never really cracked that one... the Captive project
(wrapping and using Windows' native NTFS code; hullo, lawyers) was
abandoned, and in any case this introduces a common point of failure
with Windows anyway. The only NTFS code that is not based on NTFS.SYS
is BootIt NG's "testing file system..." and some DOS NTFS tools.
Agreed again =)
I see what you are saying but what does grouping files by size really
accomplish? I personally use a mix of large and small files so am not sure
what the advantage would be of grouping them in different areas of the
disk based on their size.

I used that merely as an example to illustrate "hystreisis" - oh
bugger, I snipped it so I have to try and spell it again...

In reality, predictions of what will be used vs. not, or what will
grow vs. what will not, would be more complex than that; may involve
location, content type, what is accessing them, etc. and the general
issue is that the more aware of this a defragger will be, the more
likely you will get "thrashing" as the lines are crossed.

Superior to this sort of automated guesswork is applying manual
separation via partitioning, and then applying settings to re-route
the players who create these files.
...get away from spinning disks and mechanically moving heads!!

Yep - it's a throwback to the mechanical age... but...
Flash based systems don't suffer *any* of those concerns

They do suffer their own ugliness on writes:
- finite write life
- write process is slow
- must write large blocks at a time

So we'd still have to fuss about delayed writes and safety, and trying
to group material to be written so as few large blocks are involved as
possible. It's cheeze not fruit, but still not a free lunch.
Now if the stupid things were easier to produce in 300gb sizes and
wouldn't die so quick...

Ah, yes. We could be running PCs in battery-backed RAM if we were
content to stay with OSs that need less RAM, e.g. Win95. In a sense,
PDA OSs are newly-forged variants of that approach.
Similar to how my setup actually is. Just without the meaningless drive
letters. ;)

I haven't found my "ABCs" to be a problem since kinnergarten :)

Mind you, I don't find the "modem line noise" (quick to type, hell to
remember; hangover from "in-house only" development?) that passes for
names in Linux to be easier, e.g. devhd0 or whatever.
Not entirely. Compilers these days are smart enough that they will try to
combine functions into one if it makes sense, to avoid unnecessary function
calls. So smaller functions are often inlined into larger functions.

OK, fair enough... hello, code bloat :)
Of course if a function is recursive, the compiler sees that and does not
try to inline it. So re-entrant functions aren't a problem. =)

Cool. I'm sure there are overrides etc. at least in the low-level or
C-tradition languages. But you know what they say about relying on
humans to get things right...
Not sure how much of this, if any, the x86 processor does.

It's been a factor since Pentium the First... out-of-order
execution's here, dunno about un-named registers, or on-the-fly
register renaming. I think that's off the drawing board, too...
processors I work with actually have some parallel code execution ability
built in. In some cases where two subsequent instructions don't access the
same registers they can execute in parallel. It makes a huuuuge speed
difference too. I've really managed to optimize some functions that way.

Yep, that's the original Pentium pipelining thing (i.e. the feature
that really made the Pentium different to the 486).

There's also how to prevent logic branches from emptying the pipeline
stores, and pipeline concurrency issues. Instructions can be
re-ordered to put "slow to execute" opcodes first, after a pipeline
flush, so the rest of the CPU can back-fill the pipeline (like us
programmers doing "busywork" straight after displaying a dialog, so we
can catch up with the user's "attention pipeline")

Further, you can predict which logic is more likely to be in effect
after the branch (e.g. counting loops loop often, exit once) and
pre-fill the pipeline with what would happen most of the time. If
that's not the branch taken, you'd have to stall, re-fill the pipeline
etc. and at that moment, efficiency drops to 486 levels.

All of which plays hell with compilers that think they control what
opcodes are executed when in what registers, and their own
optimization logic as designed for previous cores.
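
To make the "counting loops loop often, exit once" point concrete, a
trivial C++ illustration (nothing more than that):

    // The loop's backward branch is taken 999,999 times and falls through once,
    // so "predict taken" is right almost always. The data-dependent branch
    // inside is only as predictable as the data - that's where mispredict
    // stalls (the fall-back-to-486 moments) come from.
    #include <cstdlib>

    int main()
    {
        int hits = 0;
        for (int i = 0; i < 1000000; ++i)   // highly predictable branch
        {
            if (std::rand() % 2)            // unpredictable branch
                ++hits;
        }
        return hits > 0 ? 0 : 1;
    }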
For the most part this holds true for the x86 processor. Though you still
have special purpose registers for floating point, SSE, MMX, and so on.

I think MMX etc. started with "hey, why can't we use these cool 80-bit
FPU registers to haul flat binary bulk around?", then went on to
duplicate these things for special use, so that you didn't have to
choose between "FPU mode" and "MMX mode".

Moore's Law gives you more transistors, but what to do with them?
- more cache
- more computing-about-computing
- more registers
- deeper and more pipelines
- duplication, e.g. avoid FPU/MMX stalls
- multiple cores
- bundle memory management and other off-CPU logic
Plus what is the price difference between the two processors?

At the same GHz, dual-core gets costly. For small extra cost, you can
go dual-core but lose GHz. See the problem?
On a Linux system? Yes.
On a Vista system? May I have more cores please?

From an application standpoint, more than 1 core is a waste for most
applications. Nobody that uses their computer for standard e-mail,
web, basic office products is going to see much benefit

From an OS standpoint though, Vista is such a hog that the more
cores the better due to its love for dozens of background processes.

I think this could apply in particular to the shell, which seems to
have significant per-item baggage when doing bulk file ops.

There are downsides to that, beyond speed.
You mean vertical sync? No I don't worry about vertical sync much. I
actually have very few screen updates. My rendering is actually tied to
the OnPaint event which only gets triggered when data actually changes or
the OS feels the window needs to be redrawn

Oh, OK; not real-time animation, then.
OpenGL being deprecated in Windows?? Got any reference to that? I mean MS
has already shot themselves in the foot with Vista. Do they really wanna
blow off their arms and legs now too?

It's just I've been seeing less attention being paid to OpenGL from
within the XP era; I'm pretty sure there was some "repositioning" by
Windows at that time, and SVGA drivers don't always do much for OpenGL
either. It's definitely a ball to watch, though... in the long term,
you'd expect MS to simplify towards DirectX-only, it's just "when" and
"how hard". Vista uses DirectX natively, which has the same effect as
including FPU in the 486 CPU; sware can count on it being there, and
even "I don't care about games" PCs will have to support it in SVGA.
I mean I realize the stock OpenGL drivers in Vista suck. But that really
is to be expected as really the video card vendor has to supply
appropriate OpenGL drivers to work with the hardware which all vendors do.

Most PCs don't have SVGA from nVidia or ATi, and Intel aren't all that
serious about SVGA development (witness the tardy driver revisions for
their i740, including poor OpenGL, and no stand-alone attempts at
graphic chipsets since i740, which was bought off-the-peg anyway).
Most high-end 3D CAD/CAM software is all OpenGL. Actually they ALL use
OpenGL. I can only think of a single one that has added support for
DirectX, which is Autodesk Inventor. Not surprising seeing how far up MS'
butt Autodesk's head appears to be.

That I understand, if you are pitching for a niche market that can be
counted on to have these things - like Cubase and sound hardware.
So I am not so sure if MS really wants to piss off the world's largest 3D
CAD/CAM developers by dropping OpenGL support? Especially when most
already have linux solutions available....

I think they will keep that market sweet, else Apple may make inroads
there. They may not spread OpenGL support as wide as all Vista,
though; may be an add-on pack (e.g. "free, needs Vista Business or
Premium, not for Home or Starter, must pass WGA").
On the other hand, maybe they do. I kind of get the odd feeling that
Microsoft is increasingly favoring the mass market

MS cares about:
- big OEMs
- big business networks
- "the rest of us", ease-of-use etc.
Naw, there's no need to run the Graphics synced to anything. They just run
when Physics and AI don't and squeeze in as many frames as possible in
that time. Some games have the option to choose between syncing to
vertical blank or not but...unless you are on a CRT, there really is no
need to sync to vertical blank.

Hm. LCDs are still the exception here; still too costly.

Then again, refresh rates only became a headache in CRTs as response
times improved, which is currently a drive in LCD development. So
when LCDs get as responsive as CRTs, would we also start to see the
flicker effect when low refresh rates are used?
Differences in PC speeds are an issue. When you play multiplayer games like
Supreme Commander you will often see games hosted with titles like this:

"2vs2 Dual Core only" =)

Because basically, the game plays for everyone at the speed of the slowest
player.

Ah, OK.
Can you elaborate a little more on this? Not entirely sure I 100%
understand the question. =)

Well, most Linux we use without knowing it; it's often used as a rich,
easy-to-change OS for routers, for example.

So one could "power" a stand-alone CD/DVD/MP3 player by basing it on
Linux, rather than hewing it out of raw hardware alone.

But such things would have to work with DRM-"enhanced" material, so
the Linux devs may have to join the club to play.


-------------------- ----- ---- --- -- - - - -
"If I'd known it was harmless, I'd have
killed it myself" (PKD)
 
S

Stephan Rose

(on copying large files from one HD to another:)



I think you'll find Windows has done the same for a while now, perhaps
as far back as when VCache replaced DOS SmartDrv in Windows for
Workgroups 3.11; at that point, memory allocated to cache ceased to be
fixed at runtime, and could dynamically balance with "live" use.

In the Win98 era, the relationship between paging and caching was
improved, so that something that was already cached in RAM didn't have
to be formally paged back into RAM ("in-cache execution").

But there may be other guess-ahead strategies, such as pre-populating
RAM with code and sectors that are expected to be used, that may work
well generally, but may "de-optimize" this specific situation.

Yea but it seems to me that memory Windows allocates for cache cannot be
used by applications whereas memory used by Linux for cache can be used by
applications. Though I could be wrong in which case Microsoft just chooses
a rather poor way to display the information that is misleading.
What matters, is how much of this footprint cannot be paged out of RAM,
e.g. interrupt-servicing code etc.

Very true.
That's a cop-out; sure, bullet-proof isn't possible, but how well a file
system approaches that goal is IMO a large factor in how non-sucky it
is. But it's usually "fast, survivable; pick one", something that
includes RAID 0 and RAID 1 as extremes.

Personally, as long as the file system can survive power outages and such
like that, I am usually pretty happy with it in terms of reliability. Even
the best file system out there is subject to hardware failure at which
point it could be irrecoverable. Backups to remote locations are simply a
must no matter how one slices or dices it. =)
Yep - I'm an OEM, too ;-)


Well, that's not really conversion, is it? :)

That's the beauty of it! No conversion needed. =)
Windows just uses FAT16 and FAT32 without any rigmarole needed to "mount
them as NTFS", though Vista needs to run *on* NTFS.

Well I know how much you like FAT16/32 though personally I don't blame
Vista. I'd prefer NTFS too if I had to choose between those choices.
Journaling is a good thing =)
Could be massive.

Remember that 150G file mass that was stripped to < 20G?

Which 150G file mass?
The resize tools tend to be 3rd-party and less intimate with NTFS's
evolving feature set. I'd rather have an OS-native tool do it.

I'd include the resizing itself in the OS-native category as well
actually. Resizing a partition may mean the need to also move files around
to make contiguous space. That too requires being intimate with the
underlying file-system.
It's also the problem with NTFS support from Windows :-/

That's a first I've heard. What NTFS problems does Windows have?
Yup, Linux never really cracked that one... the Captive project
(wrapping and using Windows' native NTFS code; hullo, lawyers) was
abandoned, and in any case this introduces a common point of failure
with Windows anyway. The only NTFS code that is not based on NTFS.SYS
is BootIt NG's "testing file system..." and some DOS NTFS tools.

Well there supposedly is a driver in existence now for linux that can
write to NTFS volumes and not mess it up. But...I never played with it.
I used that merely as an example to illustrate "hystreisis" - oh bugger,
I snipped it so I have to try and spell it again...

Hysteresis =)
In reality, predictions of what will be used vs. not, or what will grow
vs. what will not, would be more complex than that; may involve
location, content type, what is accessing them, etc. and the general
issue is that the more aware of this a defragger will be, the more
likely you will get "thrashing" as the lines are crossed.

Superior to this sort of automated guesswork is applying manual
separation via partitioning, and then applying settings to re-route the
players who create these files.

Ok I understand you now. =)
Yep - it's a throwback to the mechanical age... but...


They do suffer their own ugliness on writes:
- finite write life
- write process is slow
- must write large blocks at a time

So we'd still have to fuss about delayed writes and safety, and trying
to group material to be written so as few large blocks are involved as
possible. It's cheeze not fruit, but still not a free lunch.

The primary reason why Microsoft's Ready Boost is complete total nonsense. =)

I agree though, Flash isn't really all that much the answer. Though solid
state flash drives do partially solve this problem by using battery backed
RAM as a buffer between the flash and the PC. So that way they can
actually commit the data whenever they feel like it while preserving the
life of the flash. Now if those stupid things weren't so expensive!!!
Ah, yes. We could be running PCs in battery-backed RAM if we were
content to stay with OSs that need less RAM, e.g. Win95. In a sense,
PDA OSs are newly-forged variants of that approach.

You know, that just reminded me. There's some company..I can't recall
which one or even where they are from. It's been a few months since I
heard about them. But they apparently came up with a memory technology
that has all the benefits of RAM *and* is capable of maintaining its
information without power. Just wish I could remember more details...
I haven't found my "ABCs" to be a problem since kinnergarten :)

Mind you, I don't find the "modem line noise" (quick to type, hell to
remember; hangover from "in-house only" development?) that passes for
names in Linux to be easier, e.g. devhd0 or whatever.

/dev/hda? =)

That's not how you, as a user anyway, address drives though. Me typing "cd
/dev/hda" would not work. That is just a drive / partition identifier and
nothing more. The only thing that is concerned about that is the mount
manager that mounts the volume to a directory.

And actually, starting in Ubuntu 7.04 anyway, the mount manager doesn't
even use the /dev/sd? (SATA/SCSI disk) or /dev/hd? stuff anymore (though
it of course still can). It actually uses something much better, which is a
unique identifier that is created for each file system. This means that if
I take my drives, disconnect them, and reconnect them in a different
order, they would still be mounted the same exact way as before.
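
For example, an /etc/fstab entry keyed on the file system's UUID rather
than the device node looks something like this (the UUID below is made
up, and the mount point is just an example):

    # /etc/fstab - mount by file system UUID, not by /dev/sd? probe order
    UUID=1c2d3e4f-aaaa-bbbb-cccc-0123456789ab  /home  ext3  defaults  0  2

so the volume lands in the same place no matter which port or order the
drive comes up on.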
OK, fair enough... hello, code bloat :)

Not *necessarily* actually. Inlining a function can actually mean less
code. One of the number one things to be inlined are things like get / set
accessors which do nothing more than provide access to an internal
variable of a class to objects outside the scope of that class.

non-inlined assembly would look roughly something like this for a get
accessor:

- previous code
- call get accessor
- get value
- return value
- continue code

Inlined version:

- previous code
- get value directly from class object
- continue code

Inlining also greatly helps in small tight loops where it can greatly
reduce the number of call & return instructions. Just saving 2
instructions makes a huge difference if said code is executed 1 million
times in a row. =)
Cool. I'm sure there are overrides etc. at least in the low-level or
C-tradition languages. But you know what they say about relying on
humans to get things right...

Well you can't tag a function as "don't inline", though you can tell the
compiler globally you don't want inlining.

You can however tag a function as "inline if possible", but never as
"inline always". Even though there is a __force_inline tag in MSVC++, it
actually doesn't force it. The compiler still retains authority over if it
actually does or does not inline it.
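
A hedged sketch of what typically gets inlined away - just the classic
accessor case, with the "inline if possible" hint spelled out (the class
and names are made up):

    class Counter
    {
        int value_;
    public:
        Counter() : value_(0) {}
        inline int Get() const { return value_; }   // "inline if possible" hint;
        inline void Add(int n) { value_ += n; }     // the compiler still decides
    };

    int main()
    {
        Counter c;
        for (int i = 0; i < 1000000; ++i)
            c.Add(1);       // with inlining, this loop has no call/return at all
        return c.Get() == 1000000 ? 0 : 1;
    }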
I think MMX etc. started with "hey, why can't we use these cool 80-bit
FPU registers to haul flat binary bulk around?", then went on to
duplicate these things for special use, so that you didn't have to
choose between "FPU mode" and "MMX mode".

Moore's Law gives you more transistors, but what to do with them?
- more cache
- more computing-about-computing
- more registers
- deeper and more pipelines
- duplication, e.g. avoid FPU/MMX stalls
- multiple cores
- bundle memory management and other off-CPU logic

True though even one day I think Moore's law is going to run into a big
problem. Eventually we'll end up with the problem that a single atom will
be difficult to define as a transistor =)
At the same GHz, dual-core gets costly. For small extra cost, you can
go dual-core but lose GHz. See the problem?

True but I'd actually still go the 1.6GHz Dual Core over the 2GHz Single
core. Simple reason being that my 2.4 GHz Dual Core wipes the floor with a
3.2 GHz P4 =)

At worst, the 1.6 GHz dual core might be just as fast as the 2.0 GHz single
core in some applications. I highly doubt though it'd be slower.

And like I said, with Vista and all the stuff it likes to do in the
background, the Dual core will likely be of far greater benefit.

What good are 400 more MHz when 800 of them are used up by the OS
background processes?
I think this could apply in particular to the shell, which seems to have
significant per-item baggage when doing bulk file ops.

There are downsides to that, beyond speed.


Oh, OK; not real-time animation, then.

Naw, nothing real-time....Yet =) I am planning on maybe doing a game
project one day.
It's just I've been seeing less attention being paid to OpenGL from
within the XP era; I'm pretty sure there was some "repositioning" by
Windows at that time, and SVGA drivers don't always do much for OpenGL
either. It's definitely a ball to watch, though... in the long term,
you'd expect MS to simplify towards DirectX-only, it's just "when" and
"how hard". Vista uses DirectX natively, which has the same effect as
including FPU in the 486 CPU; sware can count on it being there, and
even "I don't care about games" PCs will have to support it in SVGA.

Software can count on OpenGL being there too actually. Even Vista still
has support for it out of the box as does every video card vendor. It
might be crappy support, but it's still there.

And in my case, where my application is a heavy graphics type application,
it isn't too much to expect the user to have a decent graphics card
installed with proper OpenGL support. Joe sixpack isn't exactly my target
audience. =)
Most PCs don't have SVGA from nVidia or ATi, and Intel aren't all that
serious about SVGA development (witness the tardy driver revisions for
their i740, including poor OpenGL, and no stand-alone attempts at
graphic chipsets since i740, which was bought off-the-peg anyway).

Honestly I don't take intel's graphics chipset seriously at all in any way
shape or form. They may be fine for low end desktop systems available at
your local walmart but beyond that...I don't care for them. Intel should
stick to what it does best, make Processors, and leave the video stuff to
nVidia and ATI.
That I understand, if you are pitching for a niche market that can be
counted on to have these things - like Cubase and sound hardware.

Which is precisely my case. =)

Though,
I think they will keep that market sweet, else Apple may make inroads
there. They may not spread OpenGL support as wide as all Vista, though;
may be an add-on pack (e.g. "free, needs Vista Business or Premium, not
for Home or Starter, must pass WGA").

Naw, all versions of Vista have OpenGL support. OpenGL support really
isn't even an OS thing. Proper OpenGL support is actually provided by the
driver, not by the operating system. When you install say nVidia's driver,
it replaces the OpenGL support with its own for proper operation with the
video card.

OpenGL also does have a few advantages over DirectX:

- Unlike DirectX, it doesn't change every time a new version comes out. New
features are added, existing features are kept intact. DirectX on the
other hand drastically changes with every release requiring pretty much a
complete rewrite for anyone who wants to support the new version.

- It's exceedingly simpler than DirectX to use and set up, especially when
compared to DirectX 10. MS was doing a good job at simplifying the
use of DirectX up until DX10. DX10 ditches the fixed function pipeline
which now means that even the SIMPLEST task requires writing custom vertex
and pixel shaders, and appropriate hardware support, to perform.

- OpenGL is supported on virtually all hardware and operating systems in
existence. DirectX is supported on Windows, Windows, Windows, Windows,
Windows and XBox.
MS cares about:
- big OEMs
- big business networks
- "the rest of us", ease-of-use etc.


Hm. LCDs are still the exception here; still too costly.

LCDs an exception and too costly?? I personally would refuse to use
anything other than an LCD. I will not even look at a CRT for more than
10 seconds...
Then again, refresh rates only became a headache in CRTs as response
times improved, which is currently a drive in LCD development. So when
LCDs get as responsive as CRTs, would we also start to see the flicker
effect when low refresh rates are used?

Actually LCD Refresh rates and response times are perfectly fine I think.
Flicker though no matter what is pretty much a thing of the past. The only
reason you saw flicker in the past was because video cards not having
enough memory to spare for double buffering.

Today, everything is double buffered if not triple buffered. You'll never
see any flicker there as the video card will sync the buffer switch with
vsync without the CPU even needing to worry about it.

Well, most Linux we use without knowing it; it's often used as a rich,
easy-to-change OS for routers, for example.

So one could "power" a stand-alone CD/DVD/MP3 player by basing it on
Linux, rather than hewing it out of raw hardware alone.

But such things would have to work with DRM-"enhanced" material, so the
Linux devs may have to join the club to play.

I'll guarantee you though that if Linux devs do need to implement it,
it'll be implemented in a way it doesn't grind the system to a halt and
it'll be an optional install for those who want it so those who don't want
it can keep it off their system. I personally have no problem voting with
my wallet and not buying DRM Infested content so I don't need a DRM
Infested operating system to play content I won't buy in the first place.

Matter of fact, I'd go out of my way to buy a pirated non-DRM version of
anything before I'd go out and buy a DRM version even if it cost me more.

--
Stephan
2003 Yamaha R6

The reason there's no day that I remember you
is that there has never been a time I forgot you
 
C

cquirke (MVP Windows shell/user)

Yea but it seems to me that memory Windows allocates for cache cannot be
used by applications whereas memory used by Linux for cache can be used by
applications. Though I could be wrong in which case Microsoft just chooses
a rather poor way to display the information that is misleading.

You may see settings to determine how much RAM should be set aside for
cache, but it's more a matter of limiting the normal load balancing
than a hard-wired limit a la SmartDrv.

For example, you may have 256M RAM and start the OS with a memory load
of, say, 72M at boot, and you want to make sure that caching doesn't
use more than 64M RAM (so that it doesn't start using up to 184M and
then have first burp lazy writes back to disk before releasing RAM to
apps that you're trying to load).

In which case, you may use a setting to limit cache to 64M, but that
doesn't mean to say that 64M is locked out of app use, i.e.
permanently reserved for caching.
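
On the Win9x side, that kind of ceiling was set in the [vcache] section
of SYSTEM.INI, e.g. (values are in KB; 65536 here is just the 64M from
the example above, and both lines are optional):

    [vcache]
    MaxFileCache=65536
    MinFileCache=4096

and even with a ceiling in place, RAM under it is only borrowed by the
cache, not reserved for it.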
Personally, as long as the file system can survive power outages and such
like that, I am usually pretty happy with it in terms of reliability. Even
the best file system out there is subject to hardware failure at which
point it could be irrecoverable.

That's the point; a survivable file system would not always be
unrecoverable just because a 512-byte sector got splatted to the wrong
address (e.g. as the result of bad RAM or whatever).

"Vendor vision" = "Well, if the hardware's bad, you can't expect the
OS to work. Not our fault! Not our problem!"

"Depth" = "OK, so what happens if the hardware IS bad and things DO
get corrupted; how do you design to mitigate that?"
Backups to remote locations are simply a
must no matter how one slices or dices it. =)

I don't feel like trusting remote locations to be survivable, maintain
my privacy, etc. thanks (which kiboshes "web apps" that hold your data
on their servers, and "web backup hosting"). If it was my site, and I
knew and managed every wire and transistor between here and there as I
do my own PC, then I may feel differently, but I only have one site...
and a limited monthly "cap", which discourages bulk moves.
Well I know how much you like FAT16/32 though personally I don't blame
Vista. I'd prefer NTFS too if I had to choose between those choices.
Journaling is a good thing =)

Yep, but so is survivability and recoverability. If NTFS blinks, it's
dead, and it can blink in such a way that it crashes NTFS.SYS on
contact. That means no Bart, Linux "Captive", WinPE access either;
not even Recovery Console and ChkDsk will run.

Rule #1: Don't eat my data. Fail that, and I don't care about any
other rules you may be better at.
Which 150G file mass?

The real-life example I mentioned a while back. AutoChk ate the whole
goddamn OS subtree on a 160G HD with "just one bad sector" (which was
hidden by firmware and NTFS's on-the-fly "fixing"), converting it to a
bunch of loose "Found*.chk" dirs.

I reassembled the OS subtree, de-malware'd the file set and registry,
then set about restoring bootability. Found I needed to revert a
registry hive to one I'd harvested from \SVI earlier, then merged back
the damaged hive as a .REG (this still worked). All thanks to Bart
CDR boot as a mOS, no thanks to MS's non-existent maintenance tools.

While working on the warranty-replacement 160G, I wanted to save
work-in-progress partition images WITHOUT HAVING TO IMAGE 150G OF
STUFF, so I de-bulked C:, shrunk it to 50G, and created an Extended
partition and logical. I could then stash multiple "undo" C: images
there, and when done, the evacuated bulk was dumped into this space
and re-pointered from the OS, which was now < 20G of stuff... but,
presumably, still lumped with the same bloated/fragmented MFT from
when it held an extra 138G of junk?
That's a first I've heard. What NTFS problems does Windows have?

I've seen a case where an NTFS that passed BING's file system checks
as OK, and which could navigate and provide files via DOS-based
ReadNTFS (a slog, so I didn't try the whole file system!) would throw
an immediate STOP BSoD whenever any sort of NTFS.SYS-dependent OS
attempted to do anything with the HD. Very, very ugly.

MS-DOS 6 introduced "ScanDisk", a user-guided replacement for ChkDsk.
This would look for problems, then stop and prompt you to allow it to
fix, else it would abort. It had a More Info button (yes, it knew
what a mouse was) that would explain what it would do if allowed to
"fix", and it would save a plain-text log to C: when done.

There's still no equivalent for NTFS. You can run ChkDsk with no
parameters, which is safe, but may return false-positive findings if
the volume is in use (and C: is ALWAYS in use). Or you can do a
ChkDsk /F, that will "fix" stuff without warning you, telling you what
it did, etc., with no Undo, and with scanty loggings buried in a
closed system that needs the OS to survive, under some
counter-intuitive name like "WinLogon" or something.

After a bad exit, it's worse; AutoChk is used there, and is hardwired
to operate as ChkDsk /F. Want to check the file system without
"fixing" it and before the OS starts stomping all over it? Can't be
done from HD boot, sorry.

And when it comes to byte-level documentation, or manual
recovery/repair tools like ye olde Norton DiskEdit - nada.
The primary reason why Microsoft's Ready Boost is complete total nonsense. =)

I don't know; I've pondered that since I first heard Jim Allchin mention
it - the low transfer rates and limited writability just seemed to
knock it out of contention. I can see one way it could work, and that
is to hold frequently-used code that can be purged out of RAM without
ever having to be written back to storage - which goes for the bulk of
the OS and app code.

The trouble is, you have the one-off overhead of having to populate
the device - but after that, there'd be far less head-wagging (which
is not only slow, but also stops all data flow to/from disk until it's
done). You could end up with heads that hardly have to go back to the
"front" of the volume for original-installed code, and could thus buzz
around the "new file edge" of the file mass.

It doesn't hurt that USB and IDE/S-ATA are different busses :)
/dev/hda? =)

What he said ;-)
That's not how you, as a user anyway, address drives though. Me
typing "cd /dev/hda" would not work. That is just a drive / partition
identifier and nothing more. The only thing that is concerned about
that is the mount manager that mounts the volume to a directory.

That's a concept new to me - the idea of volumes being "grafted" into
dirs, like a hardware shortcut - and I'm coming to like it. It comes
up with .WIM management as well, and works on FATxx (i.e. doesn't
depend on NTFS's native "junction" or "reparse point" features)
And actually, starting in Ubuntu 7.04 anyway, the mount manager doesn't
even use the /dev/sd? (SATA/SCSI disk) or /dev/hd? stuff anymore (though
it of course still can). It actually uses something much better, which is a
unique identifier that is created for each file system. This means that if
I take my drives, disconnect them, and reconnect them in a different
order, they would still be mounted the same exact way as before.

That's cool! I wish XP/Vista would do that, and I wish XP would
REMEMBER NOT TO ENABLE SYSTEM RESTORE ON THEM.

At least Vista doesn't start puking SR data on every new HD it sees,
which is one way it's safer than XP for at-risk HDs and file systems.
Not *necessarily* actually. Inlining a function can actually mean less
code. One of the number one things to be inlined are things like get / set
accessors which do nothing more than provide access to an internal
variable of a class to objects outside the scope of that class.

non-inlined assembly would look roughly something like this for a get
accessor:

- previous code
- call get accessor
- get value
- return value
- continue code

Inlined version:

- previous code
- get value directly from class object
- continue code

Oh, OK. Kinda like directives gone mad, where what looks like 200
lines of code is actually 3 pages of compiler settings and 2 opcodes.
Inlining also greatly helps in small tight loops where it can greatly
reduce the number of call & return instructions. Just saving 2
instructions makes a huge difference if said code is executed 1 million
times in a row. =)

Amen! Which brings us to self-modifying code... heh heh, I know
that's highly verboten these days :)
True though even one day I think Moore's law is going to run into a big
problem. Eventually we'll end up with the problem that a single atom will
be difficult to define as a transistor =)

Yup; electricity gets too "lumpy", as do the photons we use to image
the circuit blocks. Big platform shift coming, there.

That's why Core 2 Duo is so impressive; suggests (to investors, for
example) that there's still big medium-term growth ahead.
True but I'd actually still go the 1.6GHz Dual Core over the 2GHz Single
core. Simple reason being that my 2.4 GHz Dual Core wipes the floor with a
3.2 GHz P4 =)

That's the problem; you're "Moebius-ing" the state chart...

            Old dof core    New fast core
    1 of    P4
    2 of                    Core 2 Duo

....with GHz as the Z-axis. So you're juggling two different
parameters (or axes) against GHz; efficiency of the core design, and
number of cores. Sure, best of X and Y is better than worst of X and
Y, even at lower Z, but is that due to X or Y?

I want to see this...

            Old dof core    New fast core
    1 of    P4              Celeron-L

....or...

            Old dof core    New fast core
    2 of    Pentium D       Core 2 Duo

....or if you like...

            New fast core
    1 of    Celeron-L
    2 of    Core 2 Duo

....as compared on a Z-axis of GHz.
At worst, the 1.6 GHz dual core might be just as fast as the 2.0 GHz single
core in some applications. I highly doubt though it'd be slower.

Pricing would suggest so, but I dunno. Intel's miscalculated before
(Mendocino was too fast for Pentium II/III's comfort) and has tried to
over-sell "it's slower but it's faster" before... remember the days
when they used some sort of in-house performance numbers to "prove"
how even doggy P60 and P66 were "faster" than DX4-100 et al?

I'd love to build one of each :)
And like I said, with Vista and all the stuff it likes to do in the
background, the Dual core will likely be of far greater benefit.

I'm kinda leaning that way, tho one client is very "green" and is
still happy with the speed of his existing PC that I built for him in
the WinME era. So he'd go for a Celeron-L.
What good are 400 more MHz when 800 of them are used up by the OS
background processes?

Yup - as long as the OS does use both cores, of course...
Honestly I don't take intel's graphics chipset seriously at all in any way
shape or form. They may be fine for low end desktop systems available at
your local walmart but beyond that...I don't care for them. Intel should
stick to what it does best, make Processors, and leave the video stuff to
nVidia and ATI.

However, they are not only "good enough" for what we generally want to
do (the newest game I play is Quake 3, and raw CPU is enough to make
that work smoothly at 1024x768), they are also a free lunch.

The cheapest add-on SVGA card (with "PoS" written all over it) costs
as much as 512M - 1G RAM, or 500G HD instead of 320G.

The price difference between an Intel mobo with integrated graphics
AND a PCI Express upgrade slot, and the same chipset generation with
no integrated graphics, is zero.

Go figure...
Which is precisely my case. =)

Understood. Perhaps complicated re-draws need a strong GPU, but once
drawn, the static stuff would prolly look much the same either way.
OpenGL also does have a few advantages over DirectX:

- Unlike DirectX, it doesn't change every time a new version comes out. New
features are added, existing features are kept intact. DirectX on the
other hand drastically changes with every release requiring pretty much a
complete rewrite for anyone who wants to support the new version.
OK...

- OpenGL is supported on virtually all hardware and operating systems in
existence. DirectX is supported on Windows, Windows, Windows, Windows,
Windows and XBox.

That I see as a major factor, yes. Not nice if you want to co-develop
for MacOS, Linux and MS, all on the same Intel and GPU.
LCDs an exception and too costly?? I personally would refuse to use
anything other than an LCD. I will not even look at a CRT for more than
10 seconds...

Hint: Change the default refresh rate from 60Hz :)

Nah, I refuse to pay as much as a 320G HD just to get one square meter
of desk back, and be stuck with a screen that's too bright, and is
hardwired to a set resolution (and looks crap at any other res).

Next!
Actually LCD Refresh rates and response times are perfectly fine I think.
Flicker though no matter what is pretty much a thing of the past. The only
reason you saw flicker in the past was because video cards not having
enough memory to spare for double buffering.

No, refresh rate flicker is where fast-fade phosphors meet slow
refresh rates - so that a large chunk of the duty cycle drops to a
higher contrast level of darkness.

LCDs are no longer so awful that you have to have the mouse pointer
leave a trail of turds to see where it is, but I dunno how good it is
at things like scrolling or spin-the-room 3D game movement.

I have a feeling that when they fix that, they may un-fix "flicker
free", unless the time-chart mechanics are different - as they may be.
I personally have no problem voting with my wallet and not buying
DRM Infested content

Because of DVD region apartheid, I buy about a third of the DVDs a
year that I would do otherwise.

I don't shop for DVDs here; I don't have the time or the interest.
But I love travelling, and I shop when I travel, especially when
cooped up in airports. I regularly see stuff I want, but if I'm not
certain it will work in my regionalized ghetto, I do without.

Yes, I could swot up crackware, but frankly, I can't be assed.

Audio CDs from Sony, I don't buy. If Sony were a person caught
rootkitting CDs just for the hell of it, they'd be off the map in jail
for years, or at least denied access to PCs and coding for years.

The courts couldn't impose that sentence, but I can, where my spend is
concerned. I don't buy what seems to be the logic in US courts, that
as long as your motivation is the making of money, it's OK.




---------- ----- ---- --- -- - - - -
Don't pay malware vendors - boycott Sony
 
S

Stephan Rose

Sorry for answering a little late...been sick the past few days, haven't
been going through posts as much, particularly the longer ones. =)
I don't feel like trusting remote locations to be survivable, maintain
my privacy, etc. thanks (which kiboshes "web apps" that hold your data
on their servers, and "web backup hosting"). If it was my site, and I
knew and managed every wire and transistor between here and there as I
do my own PC, then I may feel differently, but I only have one site...
and a limited monthly "cap", which discourages bulk moves.

All my remote sites are actually owned by me, not a 3rd party. =)

Though, I will, probably in the near future, have to put up some 3rd party
servers because for security reasons they will need to be in geographically
vastly different regions. Like US east coast vs. west coast type different
regions.
Yep, but so is survivability and recoverability. If NTFS blinks, it's
dead, and it can blink in such a way that it crashes NTFS.SYS on
contact. That means no Bart, Linux "Captive", WinPE access either;
not even Recovery Console and ChkDsk will run.

Rule #1: Don't eat my data. Fail that, and I don't care about any
other rules you may be better at.


The real-life example I mentioned a while back. AutoChk ate the whole
goddamn OS subtree on a 160G HD with "just one bad sector" (which was
hidden by firmware and NTFS's on-the-fly "fixing"), converting it to a
bunch of loose "Found*.chk" dirs.

I reassembled the OS subtree, de-malware'd the file set and registry,
then set about restoring bootability. Found I needed to revert a
registry hive to one I'd harvested from \SVI earlier, then merged back
the damaged hive as a .REG (this still worked). All thanks to Bart
CDR boot as a mOS, no thanks to MS's non-existent maintenance tools.

While working on the warranty-replacement 160G, I wanted to save
work-in-progress partition images WITHOUT HAVING TO IMAGE 150G OF
STUFF, so I de-bulked C:, shrunk it to 50G, and created an Extended
partition and logical. I could then stash multiple "undo" C: images
there, and when done, the evacuated bulk was dumped into this space
and re-pointered from the OS, which was now < 20G of stuff... but,
presumably, still lumped with the same bloated/fragmented MFT from
when it held an extra 138G of junk?

Ahh ok, gotcha. =)

I don't know; I've pondered that since I first heard Jim Allchin mention
it - the low transfer rates and limited writability just seemed to
knock it out of contention. I can see one way it could work, and that
is to hold frequently-used code that can be purged out of RAM without
ever having to be written back to storage - which goes for the bulk of
the OS and app code.

The trouble is, you have the one-off overhead of having to populate
the device - but after that, there'd be far less head-wagging (which
is not only slow, but also stops all data flow to/from disk until it's
done). You could end up with heads that hardly have to go back to the
"front" of the volume for original-installed code, and could thus buzz
around the "new file edge" of the file mass.

True, I suppose that might work, as long as it doesn't need to read more
than the bandwidth can provide at any given point in time.

Personally though, I still consider not needing virtual memory in the first
place to be the best solution. =)

On that note...going to be adding 2 more gigs of RAM to my system soon.
That will put my total at 4 gigs out of the 8 gigs my motherboard can
support. =)
It doesn't hurt that USB and IDE/S-ATA are different busses :)


What he said ;-)


That's a concept new to me - the idea of volumes being "grafted" into
dirs, like a hardware shortcut - and I'm coming to like it. It comes
up with .WIM management as well, and works on FATxx (i.e. doesn't
depend on NTFS's native "junction" or "reparse point" features)


That's cool! I wish XP/Vista would do that, and I wish XP would
REMEMBER NOT TO ENABLE SYSTEM RESTORE ON THEM.

At least Vista doesn't start puking SR data on every new HD it sees,
which is one way it's safer than XP for at-risk HDs and file systems.

And System Screw up is such a waste of space too. If I look at the number
of restore points XP has on this system it's just stupid.

Understood. Perhaps complicated re-draws need a strong GPU, but once
drawn, the static stuff would prolly look much the same either way.

Actually my overhead is fixed; re-draws don't cost me any less than the
initial draw. The static stuff is usually the heavier load, actually. The
only changing content is the action that the user is currently performing,
which is usually an extremely light rendering load. The only exception
would be if the user selects a large portion of, if not all, elements.

I render selected elements twice - the static render and then the
selection render - because that way I don't need to care about state
changes for whether an element is selected or not. It's more expensive, as
it doubles the rendering cost for each selected element compared to just
flagging said element as selected, but it spares me a lot of maintenance
work.

And I've always found it annoying in many apps when you see stuff shown as
"selected" that isn't, because of a flaw in the maintenance logic that
doesn't always correctly clear the selected bits.
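To make that trade-off concrete, here's a minimal sketch of the two
approaches. Element and the render_* functions are invented stand-ins for
illustration, not from my actual code or any real toolkit.

# Toy sketch of the two selection-rendering strategies described above.

class Element:
    def __init__(self, name, selected=False):
        self.name = name
        self.selected = selected

def render_static(e):
    print("draw %s (normal style)" % e.name)

def render_selected(e):
    print("draw %s (selected style)" % e.name)

def draw_with_overlay(elements, selection):
    # Render everything the same way, then render the selected set a
    # second time on top: roughly double the cost per selected element,
    # but no per-element selection state to keep in sync between frames.
    for e in elements:
        render_static(e)
    for e in selection:
        render_selected(e)

def draw_with_flags(elements):
    # The one-pass alternative: cheaper per frame, but a missed flag
    # clear is exactly where stale "ghost selections" come from.
    for e in elements:
        if e.selected:
            render_selected(e)
        else:
            render_static(e)

if __name__ == "__main__":
    elems = [Element("line1"), Element("circle2"), Element("arc3")]
    draw_with_overlay(elems, selection=[elems[1]])

With the overlay version, the selection is just a set held in one place; the
flag version scatters the same state across every element.
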
That I see as a major factor, yes. Not nice if you want to co-develop
for MacOS, Linux and MS, all on the same Intel and GPU.


Hint: Change the default refresh rate from 60Hz :)

60Hz is beyond anything the eye can perceive. Why go faster?
Nah, I refuse to pay as much as a 320G HD just to get one square meter
of desk back, and be stuck with a screen that's too bright, and is
hardwired to a set resolution (and looks crap at any other res).

Well I personally love my 20.1 inch LCD and like I said, I would refuse to
look at a CRT. I will get massive headaches from even looking at a CRT for
more than 10 minutes. Brightness can be adjusted btw =)

And as far as resolution goes, the more the better. I consider 1280x1024
small! Actually the 1600x1200 on this LCD is starting to bug me too...

Seriously considering trying to find a ????x1200 widescreen to get more
horizontal resolution.
Next!


No, refresh rate flicker is where fast-fade phosphors meet slow
refresh rates - so that a large chunk of the duty cycle drops to a
higher contrast level of darkness.

LCDs are no longer so awful that you have to have the mouse pointer
leave a trail of turds to see where it is, but I dunno how good it is
at things like scrolling or spin-the-room 3D game movement.

On el-cheapo walmart LCDs? Crappy.
On decent high quality LCDs that aren't really all that much more
expensive? Absolutely flawless. I do gaming on this LCD without any kind
of trouble, no trails, etc.
I have a feeling that when they fix that, they may un-fix "flicker
free", unless the time-chart mechanics are different - as they may be.


Because of DVD region apartheid, I buy about a third of the DVDs a
year that I would do otherwise.

Well that's why my PC is my DVD player. I have no choice in many of the
DVDs I own. If I want Hamasaki Ayumi's latest concert DVD, there is only
one place in the world and one region I will get it from: Japan

Leaves me with little choice. =)
I don't shop for DVDs here; I don't have the time or the interest.
But I love travelling, and I shop when I travel, especially when
cooped up in airports. I regularly see stuff I want, but if I'm not
certain it will work in my regionalized ghetto, I do without.

Yes, I could swot up crackware, but frankly, I can't be assed.

Audio CDs from Sony, I don't buy. If Sony were a person caught
rootkitting CDs just for the hell of it, they'd be off the map in jail
for years, or at least denied access to PCs and coding for years.

The courts couldn't impose that sentence, but I can, where my spend is
concerned. I don't buy what seems to be the logic in US courts, that
as long as your motivation is the making of money, it's OK.

Yea it's funny what corporations with lots of cash can get away with...





--
Stephan
2003 Yamaha R6

The reason there is no day I find myself remembering you
is that there has never been a moment I forgot you
 
C

cquirke (MVP Windows shell/user)

All my remote sites are actually owned by me, not a 3rd party. =)

Cool! Not attainable for most of us, heh.
Ahh ok, gotcha. =)

Not a special case - applies wherever folks delete stuff with the
expectation of improved performance (all the more so when everything
is in the same "engine room" C:).

The trend in Windows has been to slow down or even reverse "bit rot"
(at least at the performance level), so that an installation will have
the same performance after years of use as when installed.

In this context, being unable to reverse accumulated MFT baggage
represents a point of failure in this quest.
On that note...going to be adding 2 more gigs of RAM to my system soon.
That will put my total at 4 gigs out of the 8 gigs my motherboard can
support. =)

Vista? If so, Vista32 or Vista64?
60Hz is beyond anything the eye can perceive. Why go faster?

Because short-fade screen elements will spend more of the time between
pulses in a darker state, increasing the depth of "noise" that you
will see as flicker, especially in your peripheral vision.
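
To put rough numbers on that "depth of noise", here's a back-of-the-envelope
sketch assuming a simple exponential phosphor decay; the 5 ms time constant
and the half-brightness threshold are illustrative choices, not measured
figures for any real tube.

# Toy model: what fraction of each refresh interval does a fast-fading
# phosphor spend below half of its peak brightness?

import math

def fraction_below_half(refresh_hz, decay_ms=5.0):
    # Time for an exponential decay to fall to half of peak brightness,
    # compared with how long each refresh interval lasts.
    half_ms = decay_ms * math.log(2)
    frame_ms = 1000.0 / refresh_hz
    return max(0.0, 1.0 - half_ms / frame_ms)

for hz in (60, 70, 75, 85):
    share = fraction_below_half(hz) * 100
    print("%d Hz: roughly %.0f%% of each refresh interval spent below "
          "half brightness" % (hz, share))

Faster refresh shrinks the dark fraction (about 79% at 60Hz down to about
71% at 85Hz with these made-up numbers); treat that as the shape of the
effect, not a measurement.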

Central vision (what you "look at") is mainly cones, which are
color-discriminating, hi-res, but slow-responding sensors.

Peripheral vision is mainly rods, which are fast-reacting, depletable,
low-res, monochrome sensors.

Selection pressure may have shaped this design as follows; peripheral
vision detects movement (e.g. a charging predator), you look in that
direction, and central vision resolves more detail for the brain to
process (e.g. is this a tree, food, or predator?).

Brightness is like volume; the mind wants more of it. Louder is
better, brighter is better. So usually, the monitor is brighter than
its surroundings; often set so bright that the "black" glows through.

The result contributes to eyestrain and fatigue. Look away from the
screen to your keyboard, or paper you are writing on, and your
too-bright screen will become an annoying flickering distraction.

However, I can see the difference between 60Hz and 70Hz even when
looking straight at the monitor, so I contest the "60Hz is too fast
to perceive" claim, even for slower central vision.

Only if I'm too close to a large screen (so the edges of it are
outside my slow central vision) can I really see the difference
between 70Hz and 75Hz or 85Hz. The difference is apparent when
looking away (as an edge-of-vision flicker); when looking straight at
the screen, there's a sense that 85Hz looks more "solid". YMMV.
Well I personally love my 20.1 inch LCD and like I said, I would refuse to
look at a CRT. I will get massive headaches from even looking at a CRT for
more than 10 minutes. Brightness can be adjusted btw =)

Sure, and that's the point I'm making. If you leave a CRT on full
brightness at low refresh rate, of course it will suck.
On decent high quality LCDs that aren't really all that much more
expensive? Absolutely flawless. I do gaming on this LCD without any kind
of trouble, no trails, etc.

If you look at LCD specs, the one that's most often quoted as a
quality discriminator is response time.
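
Rough arithmetic on why that number matters: compare the quoted response
time with how long a frame is actually on screen. The panel figures below
are illustrative, not from any specific model.

# How many displayed frames does a single pixel transition smear across?

def frames_to_settle(response_ms, content_fps=60.0):
    frame_ms = 1000.0 / content_fps
    return response_ms / frame_ms

for response_ms in (25, 16, 8, 5):
    n = frames_to_settle(response_ms)
    print("%2d ms panel at 60 fps: a transition smears across about "
          "%.1f frame(s)" % (response_ms, n))

A 25 ms panel is still mid-transition when the next frame arrives, which is
where the trails come from; once the response time drops well under the
frame time, the smear effectively disappears.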


------------ ----- ---- --- -- - - - -
Our senses are our UI to reality
 
S

Stephan Rose

Cool! Not attainable for most of us, heh.

Good point =)
Not a special case - applies wherever folks delete stuff with the
expectation of improved performance (all the more so when everything
is in the same "engine room" C:).

The trend in Windows has been to slow down or even reverse "bit rot"
(at least at the performance level), so that an installation will have
the same performance after years of use as when installed.

In this context, being unable to reverse accumulated MFT baggage
represents a point of failure in this quest.

That's one of the nicer things about Ext3 then. Once the file system is
created it's done! It doesn't degrade through extended use or accumulate
baggage. =)
Vista? If so, Vista32 or Vista64?

Ubuntu =)
Because short-fade screen elements will spend more of the time between
pulses in a darker state, increasing the depth of "noise" that you
will see as flicker, especially in your peripheral vision.

Central vision (what you "look at") is mainly cones, which are
color-discriminating, hi-res, but slow-responding sensors.

Peripheral vision is mainly rods, which are fast-reacting, depletable,
low-res, monochrome sensors.

Selection pressure may have shaped this design as follows; peripheral
vision detects movement (e.g. a charging predator), you look in that
direction, and central vision resolves more detail for the brain to
process (e.g. is this a tree, food, or predator?).

Brightness is like volume; the mind wants more of it. Louder is
better, brighter is better. So usually, the monitor is brighter than
its surroundings; often set so bright that the "black" glows through.

The result contributes to eyestrain and fatigue. Look away from the
screen to your keyboard, or paper you are writing on, and your
too-bright screen will become an annoying flickering distraction.

However, I can see the difference between 60Hz and 70Hz even when
looking straight at the monitor, so I contest the "60Hz is too fast
to perceive" claim, even for slower central vision.

You can see it on a CRT because a CRT cannot illuminate all pixels at
once. An LCD on the other hand can since the backlight provides the light
and the pixels can maintain their color state via a small built in
capacitor until their next refresh cycle. So a pixel will never go *off*
whereas on a CRT, by the time the beam reaches the bottom, the top is
already dark again.

That is why refresh rates don't matter on the LCD beyond updating the
image fast enough so the eye can't follow with changes to the image.
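
One way to picture that: freeze time at the instant the beam hits the bottom
row and compare how bright each part of the screen still is. Again a toy
model; the 5 ms phosphor decay constant is just an illustrative figure.

# Snapshot of relative brightness down the screen at the moment the CRT
# beam reaches the bottom row, versus an LCD whose cells hold their state.

import math

def crt_row_brightness(row, total_rows, refresh_hz=60.0, decay_ms=5.0):
    # The top row was refreshed almost a full frame ago; the bottom row
    # was refreshed just now.
    frame_ms = 1000.0 / refresh_hz
    age_ms = frame_ms * (1.0 - row / float(total_rows - 1))
    return math.exp(-age_ms / decay_ms)

rows = 1200
for row in (0, rows // 2, rows - 1):
    crt = crt_row_brightness(row, rows)
    print("row %4d: CRT ~%5.1f%% of peak, LCD ~100%% (cell holds its state)"
          % (row, crt * 100))

With these made-up numbers the top of the CRT is down to a few percent of
peak by the time the bottom is drawn, while every LCD row just sits at its
held value until the next update.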

CRTs also, for the same reason, suffer with problems related to frequency
differences between their refresh rate and room lighting. That's why I get
headaches looking at them.
Only if I'm too close to a large screen (so the edges of it are
outside my slow central vision) can I really see the difference
between 70Hz and 75Hz or 85Hz. The difference is apparent when
looking away (as an edge-of-vision flicker); when looking straight at
the screen, there's a sense that 85Hz looks more "solid". YMMV.


Sure, and that's the point I'm making. If you leave a CRT on full
brightness at low refresh rate, of course it will suck.



If you look at LCD specs, the one that's most often quoted as a
quality discriminator is response time.


Yep, response time is important. That's why I'd never buy a cheap LCD
with a slow response time. =)

The other thing I look at when buying LCDs is viewing angle. 160 degrees
minimum, 170 degrees preferred.


--
Stephan
2003 Yamaha R6

The reason there is no day I find myself remembering you
is that there has never been a moment I forgot you
 
C

cquirke (MVP Windows shell/user)

On Sat, 07 Jul 2007 10:25:52 -0500, Stephan Rose
You can see it on a CRT because a CRT cannot illuminate all pixels at
once. An LCD on the other hand can since the backlight provides the light
and the pixels can maintain their color state via a small built in
capacitor until their next refresh cycle. So a pixel will never go *off*
whereas on a CRT, by the time the beam reaches the bottom, the top is
already dark again.

Ah, that may be the crux, as LCD pixels are "dark" when triggered (and
stay set until the next pass) against a constant backlight (which often
leaks enough that you never get a true non-glowing black).
CRTs also, for the same reason, suffer with problems related to frequency
differences between their refresh rate and room lighting. That's why I get
headaches looking at them.

Yep, and that may be worse in the US, as 60Hz is closer to refresh
rates than 50Hz. The effect is; "I'm sure I set the rate, why does it
look like 60Hz? Oh, I see it's already on 75Hz. Strip lights?"
Yep, response time is important. That's why I'd never buy a cheap LCD
with a slow response time. =)
The other thing I look at when buying LCDs is viewing angle. 160 degrees
minimum, 170 degrees preferred.

OK. Something else I forget to worry about, on CRTs.

If LCDs were the same price, I'd accept their limitations and buy them for
space convenience over CRTs. Strip lighting sucks (both electronic
and audible noise), tho CFT replacements for filament bulbs are nice,
especially the "warm" ones.

I'd like CFT downlights, rather than the hot-and-hungry halogen stuff.


-------------------- ----- ---- --- -- - - - -
Tip Of The Day:
To disable the 'Tip of the Day' feature...
 
