Stephan Rose
If those were created in a "linear" way, then the process should be
fairly "smooth" from one HD to another, tho back-and-forth thrashing
would be inevitable if the source and destination were different
volumes on the same physical HD.
Yea it was from one HD to another. Though even if I move such data on the
same HD, I get far less head thrashing than I do under Windows.
Virtually none actually...
I asked, because I was wondering about possible interface intelligence
that SCSI and perhaps SATA/NCQ are reputed to have. That may clump
any OS burps that may pop up during the copy ;-)
Is this a system with enough RAM to prevent paging?
I could probably turn off my swap partition and wouldn't even notice.
So yes =)
I'm pretty sure linear addressing (LBA) will fill cylinders before
clicking heads forward. A "cylinder" being the logical construct of
all tracks on all platter surfaces that can be accessed without moving
the heads - that could span physical HDs in RAID 0, for example.
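A rough sketch of that ordering in C (illustrative only, not any driver's
actual code):

  /* Classic CHS -> LBA mapping: the sector number varies fastest,
     then the head, then the cylinder - so consecutive LBAs fill a
     whole cylinder before the head assembly has to step. Purely
     illustrative; modern drives remap geometry internally. */
  unsigned long chs_to_lba(unsigned long c, unsigned long h,
                           unsigned long s,  /* sectors are 1-based */
                           unsigned long heads_per_cyl,
                           unsigned long sectors_per_track)
  {
      return (c * heads_per_cyl + h) * sectors_per_track + (s - 1);
  }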
Exceptions might arise when HD firmware (or OS "fixing") relocates
failing sectors (or clusters) to OK spares. Shouldn't really be an
active "feature" in a healthy setup, tho.
Very good point, I didn't even think about that.
NTFS duplicates the first few records of the MFT, and that's it. Not
sure if that includes the run start/length entries for all files,
which would be NTFS's equivalent to FAT1+FAT2.
It sounds as if Ext2/3 does something similar, depending on just how
far "defines the file system itself" takes you.
The superblock, basically: the 1 KB structure near the start of the
partition (at byte offset 1024, after the boot area) that defines how the
file system is structured.
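A minimal C sketch to peek at it (the device path is just an example; the
magic number 0xEF53 sits 56 bytes into the superblock, which itself starts
at byte offset 1024):

  #include <stdio.h>
  #include <stdint.h>

  /* Minimal sketch: peek at an Ext2/3 superblock's magic number.
     The superblock starts at byte 1024; its magic (0xEF53) sits 56
     bytes in, little-endian on disk. The device path is just an
     example - point it at any ext2/3 partition you can read. */
  int main(void)
  {
      FILE *f = fopen("/dev/sda1", "rb");   /* hypothetical device */
      uint16_t magic;
      if (!f) return 1;
      fseek(f, 1024 + 56, SEEK_SET);        /* superblock + s_magic */
      fread(&magic, sizeof magic, 1, f);
      fclose(f);
      printf("magic = 0x%04X (%s)\n", magic,
             magic == 0xEF53 ? "looks like ext2/3" : "not ext2/3");
      return 0;
  }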
Depends on how the NTFS was created, perhaps - i.e. formatted as NTFS,
or converted from FATxx, etc. I know there's an OEM setup parameter
to address this, which sets aside space for an eventual NTFS MFT after
the initial FATxx file system is converted later. I think this is to
cater for a "FAT32, install/image OS, convert NTFS" build method that
old build tools may require some OEMs to use.
I don't think I have converted a file system from FAT32 to NTFS since 1995 =)
Also, not everything that is "can't move" is MFT; some may be
pagefile, etc. Not sure how Diskeeper shows this info...
It separates them into different categories. The MFT gets its own color.
Directory files get their own color. The pagefile gets its own color.
Other non-movable stuff gets its own color.
It does a pretty nice job; I was happy with it under Windows. I am still
glad I don't really need it anymore though. =)
Yup. Also, does defragging purge old entries?
No, it doesn't. A defragger really has no business modifying the file
system contents itself, such as shrinking the MFT, in my opinion. It
should defrag and that is it; shrinking the MFT should be done via a
separate tool.
Not even sure if there is such a tool for NTFS.
I kinda like the old FATxx design, for one particular reason: it's
easy to preserve and re-assert the file system structure in data
recovery scenarios, if you know exactly where it will be.
I'm not sure how well Ext2/3 does if the file system is trashed and needs
to be repaired. I do know this though: in Ext3, file undeletion is
impossible, as it zeroes out the block pointers in the inode for
reliability reasons in the event of a crash.
Hmm... that sounds like a missed "scalability" opportunity to me.
It has major performance advantages though. Something that doesn't need
to be created takes no time to create. I think in the days of hard disks
under 1 gigabyte, the Ext2 limits were very rarely reached. But today?
Ext3 can handle volumes up to 32 terabytes, and the file counts one could
expect from that much data storage.
So why spend time constantly resizing the MFT, and subsequently introduce
a potential failure point in the file system, when there is no need to?
Defrag = "duty now for the future", where the intention is to waste
time defragging at times the user isn't trying to work. Underfootware
defragging is a Vista idea that I have yet to warm to.
Agreed, it's so nice not to need that.
What's more of a problem is the thrashing that can happen when the
status of files is perceived to change. An overly-sensitive
"frequently-used" logic could cause that, so you need a bit of
hysteresis (spelling?) to keep that from bouncing around.
Hysteresis, yes. =)
Logically, all surfaces, heads and tracks of "the disk" are considered
as one, and (hopefully) addressed in a sequence that minimizes head
travel. This logic has been present since DOS first dealt with
double-sided diskettes, which are filled in sector, head, track
order (rather than the sector, track, head order you might intuit).
My first disk system didn't have that logic; it treated each side of a
diskette as a separate disk drive! That was a home-cooked add-on for
the ZX Spectrum, which I worked with quite a bit.
Hey, get two disks for the price of one
Which means if you want to concentrate travel within a narrow band,
then leaving it to defrag isn't the hot setup - better to size
partitions and volumes and apply your own control to what goes where.
Which is why I absolutely love not having drive letters under linux. =)
I could go as far as creating a set of directories, each mounted to a
different partition, to categorize my data and impose limits on how much
of said data I want to be able to store. That way I can always guarantee
there will be X amount of space in a certain directory if I have a need
for that.
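A hypothetical /etc/fstab along those lines (devices and mount points
invented):

  # each category gets its own partition, so each directory has a
  # hard, guaranteed space budget
  /dev/sda5   /home/stephan/music      ext3   defaults   0  2
  /dev/sda6   /home/stephan/projects   ext3   defaults   0  2
  /dev/sda7   /home/stephan/archive    ext3   defaults   0  2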
Try that under Windows... you'd end up with a sea of meaningless
drive letters.
Else, as far as I can see, you will always have a schlep from the
old/fast code at the "front" to the new stuff at the "end". The fix
for that is to suck out dead stuff from the "middle" and stash it on
another volume, then kick the OS's ass until it stops fiddling with
this inactive stuff via underfootware processes.
That includes the dead weight of installation archives, backed-up old
code replaced by patches, original pre-install wads of these patches,
etc., which MS OSs are too dumb to clean up. A "mature" XP installation
can have 3 dead files for every 1 live one... nowhere else would that
sort of inefficiency be tolerated.
I think mine in the office is up to a ratio of 5:1 =)
Sure, fair enough. By now, one might expect apps doing "heavy things"
to spawn multiple threads, and Vista's limits on what apps can do
encourages this, i.e. splitting underfootware into parts that run as
service and as "normal app" respectively.
These days, you need a spare core just to run all the malware ;-)
Hahahaha! I like it =)
Yup, no lie there. I think the compiler will take care of some
details, as long as you separate threads in the first place.
Actually, no, it really doesn't. The responsibility that my multi-threaded
code works rests entirely on me. The compiler gives me absolutely no
aid in that regard. All existing programming languages essentially lack
the ability to properly define a multi-threaded application. The compiler
isn't even aware that my app is multi-threaded. To it, each thread is just
another function. The only thing that makes it multi-threaded are the
calls I make to the OS to tell it which functions of my code I'd like
spawned on their own threads.
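A minimal POSIX sketch of what I mean ("worker" is an invented name; to
the compiler it's just an ordinary function until the pthread_create()
call happens at runtime):

  #include <pthread.h>
  #include <stdio.h>

  /* To the compiler this is just another function; nothing here is
     marked as concurrent. Only the pthread_create() call below, at
     runtime, makes it a thread. */
  void *worker(void *arg)
  {
      printf("hello from thread %d\n", *(int *)arg);
      return NULL;
  }

  int main(void)
  {
      pthread_t t;
      int id = 1;
      pthread_create(&t, NULL, worker, &id);
      pthread_join(t, NULL);   /* correctness is entirely on me */
      return 0;
  }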
Microsoft's .Net Framework does give *some* aid in multi-threading at
runtime; for example, modifying properties of some Windows controls will
throw an exception if the call isn't thread-safe. But these are all
runtime checks. The compiler still merrily compiles it all without
giving an error.
AFAIK NT's been multi-core-aware for a while, at the level of separate
CPUs at least. Hence the alternate 1CPU / multi-CPU HALs, etc.
Not sure how multiple cores within the same CPU are used, though - it
may be that the CPU's internal logic can load-balance across them, and
that this can evolve as the CPUs do. I do know that multi-core CPUs
are expected to present themselves as a single CPU to processes that
count CPUs for software licensing purposes.
It's up to the operating system to load-balance the CPU. The CPU itself
can do very little in that regard. If it gets a piece of code to execute,
it can't say "I'll execute half on one core and half on the other", as
the second half may depend on the results of the first half, so it cannot
be executed in parallel.
There are things each core does on its own to attempt to improve
performance, but load-balancing across the cores is an operating system
task. And the best the OS can do is spread active threads across the cores.
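A minimal sketch of that from the application side (names invented): ask
how many cores are online, spawn that many workers, and let the scheduler
spread them:

  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Sketch: one worker per online core; the OS scheduler does the
     actual spreading. "crunch" is an invented placeholder. */
  void *crunch(void *arg) { return arg; /* some independent chunk */ }

  int main(void)
  {
      long n = sysconf(_SC_NPROCESSORS_ONLN);  /* cores the OS sees */
      pthread_t tid[64];
      if (n > 64) n = 64;
      for (long i = 0; i < n; i++)
          pthread_create(&tid[i], NULL, crunch, NULL);
      for (long i = 0; i < n; i++)
          pthread_join(tid[i], NULL);
      printf("ran %ld workers, one per core (scheduler willing)\n", n);
      return 0;
  }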
It's been said that Vista makes better use of multiple cores than XP,
but often in posts that compare single-core "P4" processors with
dual-core "Core 2 Duo" processors at similar GHz, without factoring in
the latter's increased efficiency per clock.
So they may in fact only be telling us what we already know, i.e. that
the new cores deliver more computational power per GHz.
No, I can see that being an example where multiple cores would help:
background pagination, spell-checking, and tracking line and page
breaks, for example - the stuff that makes typing feel "sticky". Not
to mention the creation of output to be sent to the printer driver,
background saves, etc. Non-destructive in-place editing can be a
challenge, and solutions often involve a lot of background logic.
Is any of that seriously still an issue, though, with today's processing
power on just a single core? I don't think a single letter felt "sticky"
writing this post. =P
Though I can see running things like spellcheck, and they probably already
are, on a separate thread. Not as much for performance reasons, but simply
because that is one of those programming problems where multi-threading
makes things sooooooo much easier for a change! Implementing a background
spellcheck without multi-threading would be a nightmare.
It still doesn't really need multiple cores per se to do it, unless
maybe it has a fresh 1,000-page book to chew through, I suppose.
Some of that challenge applies to any event-driven UI, as opposed to
procedural or "wizard" UIs. IOW, solutions to that (which unlink each
action from the base UI module, and sanity-check states between the
various methods spawned) may also pre-solve for multi-core.
Sure. I used to "hide" some overhead by doing some processing
straight after displaying a UI, on the basis that the user will look
at it for a while before demanding attention (this is back in the PICK
days), but folks familiar with their apps will type ahead and catch you
out. There are all sorts of ways to go wrong here, e.g...
- spawn a dialog with a single button on it called OK
- when the user presses OK, start a process
- replace that button with one to Cancel
- when the process is done, report results
- replace that button with one for OK to close the dialog
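In event-driven terms that's a little state machine; a generic,
toolkit-agnostic sketch (names invented):

  /* Sketch of that dialog as a state machine; all names invented.
     Each event just advances the state - the trap is a type-ahead
     event arriving for a state you've already left. */
  enum dlg_state { SHOW_OK, RUNNING, DONE };
  static enum dlg_state state = SHOW_OK;

  void on_button(void)
  {
      switch (state) {
      case SHOW_OK:            /* user pressed OK: start the process */
          state = RUNNING;     /* button now reads "Cancel"          */
          break;
      case RUNNING:            /* user pressed Cancel: abort         */
          state = DONE;
          break;
      case DONE:               /* OK again: close the dialog         */
          break;
      }
  }

  void on_process_done(void)
  {
      if (state == RUNNING)    /* guard against stale/type-ahead events */
          state = DONE;        /* report results, swap the button       */
  }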
Yup, I have done stuff like that before quite frequently. =)
I think true multicores may be more "transparent" than HyperThreading,
in that HT prolly can't do everything a "real" core can. So with HT,
only certain things can be shunted off to HT efficiently, whereas a
true multicore processor will have, er, multiple interchangeable cores.
Basically, that's correct. =)
There's no doubt that "computing about computing" can pay off, even in
this age of already-complex CPUs. The Core 2 Duo's boost over
NetBurst at lower clock speeds is living proof of that, and frankly, I
was quite surprised by this. I thought that gum had had all its
flavor chewed out of it; witness the long-previous "slower but faster"
processors from Cyrix, and AMD's ghastly K5.
I used to have a Cyrix once!!
Isn't this pretty much what Windows' internal messaging stuff does? I
know this applies to inter-process comms, I just don't know whether
the various bits of an app are different processes at this level.
Perhaps some of this logic can be built into the standard code and UI
libraries that the app's code is built with?
Windows' internal messaging stuff is a single-threaded message
pump; nothing more than a simple FIFO. Even in a multi-threaded
application, the message pump is still a single-threaded FIFO.
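That pump is literally the classic Win32 message loop; a bare-bones
sketch:

  #include <windows.h>

  /* Minimal Win32 program whose whole life is the message pump:
     GetMessage() pulls the next queued message off a single FIFO,
     DispatchMessage() hands it to the window procedure - strictly
     one message at a time, on this one thread. */
  static LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l)
  {
      if (m == WM_DESTROY) { PostQuitMessage(0); return 0; }
      return DefWindowProc(h, m, w, l);
  }

  int WINAPI WinMain(HINSTANCE hi, HINSTANCE prev, LPSTR cmd, int show)
  {
      WNDCLASSA wc = {0};
      wc.lpfnWndProc   = WndProc;
      wc.hInstance     = hi;
      wc.lpszClassName = "pumpdemo";
      RegisterClassA(&wc);
      ShowWindow(CreateWindowA("pumpdemo", "pump", WS_OVERLAPPEDWINDOW,
                               CW_USEDEFAULT, CW_USEDEFAULT, 320, 240,
                               NULL, NULL, hi, NULL), show);

      MSG msg;
      while (GetMessage(&msg, NULL, 0, 0) > 0)  /* the FIFO */
      {
          TranslateMessage(&msg);
          DispatchMessage(&msg);
      }
      return (int)msg.wParam;
  }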
This becomes very evident when writing .Net based applications, as many
controls will whine and moan if they are modified from a thread other than
the one they were created on. If Windows distributed events across
multiple threads, those controls would never shut up! =)
The problem is race conditions between the two processes. If you did
spawn one process as a "loose torpedo", the OS could pre-emptively
time-slice between it and the parent, and it could come to pass that the
two halves wind up on different cores.
My multi-threading problem actually is not really a race condition.
Currently my engine is set up as follows:
- One buffer for vertex data
- One buffer for color data
These two buffers are passed to OpenGL as vertex and color arrays
respectively, as what I primarily display are just colored polygons; no
textures needed. The buffers each have a fixed size.
The rendering loop then goes to each object and queries it for its vertex
and color data, and this data is then added to the respective buffers.
When the buffers are full, they are submitted to the video card for
rendering, and the engine goes to chew on the next set of objects while
the video card is busy processing the data I sent to it.
So I do have some level of multi-threading here in terms of offloading
processing to the GPU.
That setup works really great for single-threaded rendering.
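The submit step, boiled down to a sketch (buffer names and sizes are
simplified here, not my actual engine code; the calls are the standard
client-array ones):

  #include <GL/gl.h>

  /* Sketch of the flush step: hand the filled CPU-side buffers to
     the GPU as client-side vertex/color arrays, then start refilling
     while the card chews on them. */
  #define MAX_VERTS 65536

  static GLfloat vtx_buf[MAX_VERTS * 2];  /* x,y per vertex     */
  static GLubyte col_buf[MAX_VERTS * 4];  /* r,g,b,a per vertex */
  static GLsizei vtx_count;

  static void flush_buffers(void)
  {
      glEnableClientState(GL_VERTEX_ARRAY);
      glEnableClientState(GL_COLOR_ARRAY);
      glVertexPointer(2, GL_FLOAT, 0, vtx_buf);
      glColorPointer(4, GL_UNSIGNED_BYTE, 0, col_buf);
      glDrawArrays(GL_TRIANGLES, 0, vtx_count); /* GPU takes over   */
      vtx_count = 0;                            /* refill from here */
  }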
Multi-threaded rendering though would require a much different approach.
Especially since OpenGL does not like multi-threading much: all OpenGL
calls have to be made from the same thread the OpenGL context was created
on.
That means I'd need multiple geometry data buffers (one set of buffers per
thread) and each thread would need the ability to submit its geometry data
to the main application thread for submission to the GPU. At the same time
though, the loop in the main application thread has to be in such a way
that it doesn't consume CPU resources. It has to be able to go to sleep
while it waits on the other threads and it also has to instantly wake up
when another thread needs it to do something.
It's all doable and it's all a big pain in the butt to implement!!
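The usual tool for that sleep-until-poked behavior is a condition
variable; a boiled-down sketch (names invented, not my actual engine
code):

  #include <pthread.h>

  /* Sketch of the main-thread loop: sleep until a worker flags that
     a buffer set is ready, then submit it from this thread (the GL
     context lives here). All names are illustrative. */
  static pthread_mutex_t lock       = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  work_ready = PTHREAD_COND_INITIALIZER;
  static int pending;               /* buffer sets queued by workers */

  void main_thread_loop(void)
  {
      for (;;) {
          pthread_mutex_lock(&lock);
          while (pending == 0)                       /* no busy-wait:  */
              pthread_cond_wait(&work_ready, &lock); /* sleeps cheaply */
          pending--;
          pthread_mutex_unlock(&lock);
          /* submit_to_gpu() would go here - GL stays on this thread */
      }
  }

  void worker_submits(void)         /* called from a render worker */
  {
      pthread_mutex_lock(&lock);
      pending++;
      pthread_cond_signal(&work_ready);  /* instant wake-up */
      pthread_mutex_unlock(&lock);
  }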
I'd much rather try to offload more processing to the already yawning and
very bored GPU instead. But it's hard to offload memcpy, where over 90% of
my load is, to the GPU!
Heh heh ;-)
I think modern games are often built on engines; specifically, an
engine to do graphics and another for sound. The AI behind the
characters' activities can be split off as an asynchronous thread,
whereas sound and graphics have to catch every beat.
Actually it is the exact opposite way around!
Physics and AI are on such a tight leash it isn't even funny. That's
especially true for multi-player games, but even in single-player games
they generally run on a very exact and tight schedule. 30 frames per
second is, I think, a popular number for AI and physics.
Graphics and sound on the other hand can be, and usually are,
asynchronous. Rendering and sound decoding/output is all handled by
hardware anyway. All the software has to do is keep feeding the
hardware with data before it runs out of stuff to do, especially in
regard to sound. That doesn't require any overly precise timing.
AI and physics, on the other hand: if they don't run at a fixed
framerate, all sorts of weird and bad things happen. Especially in
multiplayer games, they absolutely have to, under all conditions, run at
a fixed rate so that all results are identical on all players' systems.
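The usual shape of that is a fixed-timestep loop; a generic sketch, not
any particular engine's code:

  #include <time.h>

  /* Generic fixed-timestep sketch: the simulation ticks at exactly
     30 Hz no matter how fast rendering runs, so every machine
     computes the identical sequence of physics/AI states. */
  #define TICK (1.0 / 30.0)

  static double now_seconds(void)
  {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      return ts.tv_sec + ts.tv_nsec / 1e9;
  }

  static void simulate_tick(void) { /* deterministic physics + AI */ }
  static void render(void)        { /* best-effort, asynchronous  */ }

  void game_loop(void)
  {
      double prev = now_seconds(), acc = 0.0;
      for (;;) {
          double now = now_seconds();
          acc += now - prev;
          prev = now;
          while (acc >= TICK) {   /* catch up in exact, fixed steps */
              simulate_tick();
              acc -= TICK;
          }
          render();               /* runs as often as it can */
      }
  }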
Supreme Commander, just as one example, runs a complete simulation of the
entire battlefield on *every* player's computer and compares them to one
another. If any one player's data does not match all the other players'
data in any way, it's considered a desync and the game is aborted.
Graphics are purely a secondary consideration; an annoying side task that
oddly enough has to be done to keep the player happy.
It may help at the OS level, e.g. when copying files, the AV scanner
could be running in the "spare" core. I think this was a large part
of the thinking behind HT... also, there will be network packets to be
batted off by the firewall, etc., and that's a coreful of stuff, too.
Well, perhaps not a complete coreful, but YKWIM.
Real-time stuff like MP3 playback (something else that may be built
into games, with the compression reducing head travel) can also
benefit from having a spare core to process it.
I don't think there is a single soundcard these days that can't decode
MP3 in hardware. =) The CPU has absolutely nothing to do there hehe.
And then there's all the DRM trash to compute, too...
Only if you're running Vista!!
--
Stephan
2003 Yamaha R6
The reason there's never a day I find myself remembering you
is that there's never been a time I forgot you.